I think a lot of the confusion comes from this thread: particularly because the wiki was updated with wrong information, not maliciously I'm sure. This information is now gone for good.

Otis, your solution is pretty much like the one given by Dhruba Borthakur and augmented by Konstantin Shvachko later in the thread, but I never did it myself.

One thing should be clear though: the NN is and will remain a SPOF (just like HBase's Master) as long as a distributed manager service (like ZooKeeper) is not plugged into Hadoop to help with failover.

J-D

On Wed, Oct 29, 2008 at 2:12 AM, Otis Gospodnetic <otis_gospodnetic@yahoo.com> wrote:
> Hi,
> So what is the "recipe" for avoiding NN SPOF using only what comes with Hadoop?
>
> From what I can tell, I think one has to do the following two things:
>
> 1) Configure the primary NN to save namespace and xa logs to multiple dirs, one of which is actually on a remotely mounted disk, so that the data actually lives on a separate disk on a separate box. This saves namespace and xa logs on multiple boxes in case of primary NN hardware failure.
>
> 2) Configure the secondary NN to periodically merge fsimage+edits and create the fsimage checkpoint. This really is a second NN process running on another box. It sounds like this secondary NN has to somehow have access to the fsimage & edits files from the primary NN server. [The docs do] not describe the best practice around that - the recommended way to give the secondary NN access to the primary NN's fsimage and edits files. Should one mount a disk from the primary NN box to the secondary NN box to get access to those files? Or is there a simpler way?
>
> In any case, this checkpoint is just a merge of fsimage+edits files and again is there in case the box with the primary NN dies. That's what's described there, more or less.
>
> Is this sufficient, or are there other things one has to do to eliminate NN SPOF?
>
> Thanks,
> Otis
> --
> Sematext -- -- Lucene - Solr - Nutch
>
> ----- Original Message ----
> > From: Jean-Daniel Cryans <jdcryans@apache.org>
> > To: core-user@hadoop.apache.org
> > Sent: Tuesday, October 28, 2008 8:14:44 PM
> > Subject: Re: SecondaryNameNode on separate machine
> >
> > Tomislav,
> >
> > Contrary to popular belief the secondary namenode does not provide failover; it's only used to do what is described here:
> >
> > So the term "secondary" does not mean "a second one" but is more like "a second part of".
> >
> > J-D
> >
> > On Tue, Oct 28, 2008 at 9:44 AM, Tomislav Poljak wrote:
> > > Hi,
> > > I'm trying to implement NameNode failover (or at least NameNode local data backup), but it is hard since there is no official documentation. Pages on this subject are created, but still empty.
> > >
> > > I have been browsing the web and the hadoop mailing list to see how this should be implemented, but I got even more confused. People are asking whether we even need the SecondaryNameNode at all, since the NameNode can write local data to multiple locations, so one of those locations can be a mounted disk from another machine. I think I understand the motivation for the SecondaryNameNode (to create a snapshot of NameNode data every n seconds/hours), but setting up (deploying and running) the SecondaryNameNode on a different machine than the NameNode is not as trivial as I expected. First I found that if I need to run the SecondaryNameNode on another machine than the NameNode, I should change the masters file on the NameNode (change localhost to the SecondaryNameNode host) and set some properties in hadoop-site.xml on the SecondaryNameNode (fs.default.name, fs.checkpoint.dir, fs.checkpoint.period etc.)
> > >
> > > This was enough to start the SecondaryNameNode when starting the NameNode with bin/start-dfs.sh, but it didn't create an image on the SecondaryNameNode.
> > > Then I found that I need to set dfs.http.address to the NameNode address (so now I have the NameNode address in both fs.default.name and dfs.http.address).
> > >
> > > Now I get the following exception:
> > >
> > > 2008-10-28 09:18:00,098 ERROR NameNode.Secondary - Exception in doCheckpoint:
> > > 2008-10-28 09:18:00,098 ERROR NameNode.Secondary - java.net.SocketException: Unexpected end of file from server
> > >
> > > My questions are the following:
> > > How to resolve this problem (this exception)?
> > > Do I need an additional property in the SecondaryNameNode's hadoop-site.xml or the NameNode's hadoop-site.xml?
> > >
> > > How should NameNode failover work ideally? Is it like this:
> > >
> > > The SecondaryNameNode runs on a separate machine from the NameNode and stores the NameNode's data (fsimage and edits) locally in fs.checkpoint.dir. When the NameNode machine crashes, we start a NameNode on the machine where the SecondaryNameNode was running and we set dfs.name.dir to fs.checkpoint.dir. Also we need to change how DNS resolves the NameNode hostname (change from the primary to the secondary).
> > >
> > > Is this correct?
> > >
> > > Tomislav
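The configuration properties discussed throughout this thread can be collected into a single hadoop-site.xml sketch. This is only an illustration of the 0.18-era property names mentioned above; the directory paths, hostname, and the two-hour checkpoint period are assumptions for the example, not values recommended anywhere in the thread:

```xml
<configuration>
  <!-- Primary NN: write the namespace image and edit log to two
       directories; the second is assumed to be a remotely mounted
       disk backed by another box (paths are illustrative). -->
  <property>
    <name>dfs.name.dir</name>
    <value>/local/hadoop/name,/mnt/remote-nn/name</value>
  </property>

  <!-- Secondary NN: where the merged checkpoint is stored locally. -->
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/local/hadoop/namesecondary</value>
  </property>

  <!-- Secondary NN: how often (in seconds) to merge fsimage+edits. -->
  <property>
    <name>fs.checkpoint.period</name>
    <value>7200</value>
  </property>

  <!-- Secondary NN: must point at the primary NN's HTTP address so it
       can fetch fsimage and edits (the setting Tomislav needed). -->
  <property>
    <name>dfs.http.address</name>
    <value>primary-nn.example.com:50070</value>
  </property>
</configuration>
```

Note that the first two properties belong on different machines (primary vs. secondary NN), matching the two-step recipe in Otis's message.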
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200810.mbox/%3C31a243e70810290608j21aa6e03v8bf1c4dd49bdeea2@mail.gmail.com%3E
I have a program to write: write a class MPG, which has two instance variables, miles and gallons, three instance methods printMPG(), setMiles(float m) and setGallons(float g), and a constructor MPG(float m, float g).

(i) printMPG() - calculates and prints the miles per gallon based on the values held in the instance variables miles and gallons - mpg = miles/gallons.
(ii) setMiles(float m) - sets miles, i.e. sets miles to the value of the parameter passed.
(iii) setGallons(float g) - sets gallons, i.e. sets gallons to the value of the parameter passed.
(iv) MPG(float m, float g) - the constructor, which sets the instance variables miles and gallons.

This is what I have and I don't know what to do after. I am completely lost. Please help someone...

import java.io;

class MPG {
    //instance variables
    int miles;
    int gallons;

    //constructor
    MPG (float m, float g) {
        miles=m;
        gallons=g;
    }

    //methods
    double PrintMPG() {
        return miles/gallons;
    }

I am really really lost.. Please someone help. I know this is a simple program... Thanks...
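One way to complete the class the assignment describes might look like the following. This is a sketch, not the only correct answer: the field types are changed from int to float so the constructor assignments compile, the invalid `import java.io;` line is dropped (it is not needed here anyway), the missing setters and closing brace are added, and a small demo `main` shows the class in use.

```java
// Completed sketch of the MPG class from the assignment.
class MPG {
    // instance variables: float, to match the constructor parameters
    float miles;
    float gallons;

    // constructor: sets both instance variables
    MPG(float m, float g) {
        miles = m;
        gallons = g;
    }

    // calculates and prints the miles per gallon (mpg = miles/gallons)
    void printMPG() {
        System.out.println("MPG: " + (miles / gallons));
    }

    // sets miles to the value of the parameter passed
    void setMiles(float m) {
        miles = m;
    }

    // sets gallons to the value of the parameter passed
    void setGallons(float g) {
        gallons = g;
    }
}

public class Main {
    public static void main(String[] args) {
        MPG car = new MPG(300f, 10f);
        car.printMPG(); // prints "MPG: 30.0"
        car.setMiles(450f);
        car.printMPG(); // prints "MPG: 45.0"
    }
}
```

The key compile error in the posted code is assigning the float parameters m and g to int fields; Java will not narrow float to int without an explicit cast, so the simplest fix is to declare the fields as float, as the assignment's method signatures suggest.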
http://www.javaprogrammingforums.com/whats-wrong-my-code/5956-help-my-program-im-new.html
Mutation testing is the evolution of TDD | Opensource.com

Since test-driven development is modeled on how nature works, mutation testing is the natural next step in the evolution of DevOps.

In "Failure is a feature in blameless DevOps," I discussed the central role of failure in delivering quality by soliciting feedback. This is the failure agile DevOps teams rely on to guide them and drive development. Test-driven development (TDD) is the conditio sine qua non of any agile DevOps value stream delivery. Failure-centric TDD methodology only works if it is paired with measurable tests.

TDD methodology is modeled on how nature works and how nature produces winners and losers in the evolutionary game.

Natural selection

In 1859, Charles Darwin proposed the theory of evolution in his book On the Origin of Species. Darwin's thesis was that natural variability is caused by the combination of spontaneous mutations in individual organisms and environmental pressures. These pressures eliminate less-adapted organisms while favoring other, more fit organisms. Each and every living being mutates its chromosomes, and those spontaneous mutations are carried to the next generation (the offspring). The newly emerged variability is then tested under natural selection—the environmental pressures that exist due to the variability of environmental conditions.

This simplified diagram illustrates the process of adjusting to environmental conditions.

Fig. 1. Different environmental pressures result in different outcomes governed by natural selection. Image screenshot from a video by Richard Dawkins.

This illustration shows a school of fish in their natural habitat. The habitat varies (darker or lighter gravel at the bottom of the sea or riverbed), as does each fish (darker or lighter body patterns and colors).
It also shows two situations (i.e., two variations of the environmental pressure):

- The predator is present
- The predator is absent

In the first situation, fish that are easier to spot against the gravel shade are at higher risk of being picked off by predators. When the gravel is darker, the lighter portion of the fish population is thinned out. And vice versa—when the gravel is a lighter shade, the darker portion of the fish population suffers the thinning-out scenario.

In the second situation, fish are sufficiently relaxed to engage in mating. In the absence of predators and in the presence of the mating ritual, the opposite results can be expected: the fish that stand out against the background have a better chance of being picked for mating and transferring their characteristics to offspring.

Selection criteria

When selecting among variability, the process is never arbitrary, capricious, whimsical, nor random. The decisive factor is always measurable. That decisive factor is usually called a test or a goal.

A simple mathematical example can illustrate this process of decision making. (Only in this case it won't be governed by natural selection, but by artificial selection.)

Suppose someone asks you to build a little function that will take a positive number and calculate that number's square root. How would you go about doing that?

The agile DevOps way is to fail fast. Start with humility, admitting upfront that you don't really know how to develop that function. All you know, at this point, is how to describe what you'd like to do. In technical parlance, you are ready to engage in crafting a unit test.

"Unit test" describes your specific expectation. It could simply be formulated as "given the number 16, I expect the square root function to return the number 4." You probably know that the square root of 16 is 4. However, you don't know the square root of some larger numbers (such as 533).
At the very least, you have formulated your selection criteria, your test or goal.

Implement the failing test

The .NET Core platform can illustrate the implementation. .NET typically uses xUnit.net as a unit-testing framework. (To follow the coding examples, please install .NET Core and xUnit.net.)

Open the command line and create a folder where your square root solution will be implemented. For example, type:

mkdir square_root

Then type:

cd square_root

Create a separate folder for unit tests:

mkdir unit_tests

Move into the unit_tests folder (cd unit_tests) and initiate the xUnit framework:

dotnet new xunit

Now, move one folder up to the square_root folder, and create the app folder:

mkdir app
cd app

Create the scaffold necessary for the C# code:

dotnet new classlib

Now open your favorite editor and start cracking!

In your code editor, navigate to the unit_tests folder and open UnitTest1.cs. Replace the auto-generated code in UnitTest1.cs with:

using System;
using Xunit;
using app;

namespace unit_tests {
    public class UnitTest1 {
        Calculator calculator = new Calculator();

        [Fact]
        public void GivenPositiveNumberCalculateSquareRoot() {
            var expected = 4;
            var actual = calculator.CalculateSquareRoot(16);
            Assert.Equal(expected, actual);
        }
    }
}

This unit test describes the expectation that the variable expected should be 4. The next line describes the actual value. It proposes to calculate the actual value by sending a message to the component called calculator. This component is described as capable of handling the CalculateSquareRoot message by accepting a numeric value. That component hasn't been developed yet. But it doesn't really matter, because this merely describes the expectations.

Finally, it describes what happens when the message is triggered to be sent. At that point, it asserts whether the expected value is equal to the actual value. If it is, the test passed and the goal is reached. If the expected value isn't equal to the actual value, the test fails.
Next, to implement the component called calculator, create a new file in the app folder and call it Calculator.cs. To implement a function that calculates the square root of a number, add the following code to this new file:

namespace app {
    public class Calculator {
        public double CalculateSquareRoot(double number) {
            double bestGuess = number;
            return bestGuess;
        }
    }
}

Before you can test this implementation, you need to instruct the unit test how to find this new component (Calculator). Navigate to the unit_tests folder and open the unit_tests.csproj file. Add the following line in the <ItemGroup> code block:

<ProjectReference Include="../app/app.csproj" />

Save the unit_tests.csproj file. Now you are ready for your first test run.

Go to the command line and cd into the unit_tests folder. Run the following command:

dotnet test

Running the unit test will produce the following output:

Fig. 2. xUnit output after the unit test run fails.

As you can see, the unit test failed. It expected that sending the number 16 to the calculator component would result in the number 4 as the output, but the output (the actual value) was the number 16.

Congratulations! You have created your first failure. Your unit test provided strong, immediate feedback urging you to fix the failure.

Fix the failure

To fix the failure, you must improve bestGuess. Right now, bestGuess merely takes the number the function receives and returns it. Not good enough. But how do you figure out a way to calculate the square root value? I have an idea—how about looking at how Mother Nature solves problems?

Emulate Mother Nature by iterating

It is extremely hard (pretty much impossible) to guess the correct value on the first (and only) attempt. You must allow for several attempts at guessing to increase your chances of solving the problem. And one way to allow for multiple attempts is to iterate.
To iterate, store the bestGuess value in the previousGuess variable, transform the bestGuess value, and compare the difference between the two values. If the difference is 0, you solved the problem. Otherwise, keep iterating.

Here is the body of the function that produces the correct value for the square root of any positive number:

double bestGuess = number;
double previousGuess;

do {
    previousGuess = bestGuess;
    bestGuess = (previousGuess + (number/previousGuess))/2;
} while((bestGuess - previousGuess) != 0);

return bestGuess;

This loop (iteration) converges the bestGuess values to the desired solution. Now your carefully crafted unit test passes!

Fig. 3. Unit test successful, 0 tests failed.

The iteration solves the problem

Just like Mother Nature's approach, in this exercise, iteration solves the problem. An incremental approach combined with stepwise refinement is the guaranteed way to arrive at a satisfactory solution. The decisive factor in this game is having a measurable goal and test. Once you have that, you can keep iterating until you hit the mark.

Now the punchline!

OK, this was an amusing experiment, but the more interesting discovery comes from playing with this newly minted solution. Until now, your starting bestGuess was always equal to the number the function receives as the input parameter. What happens if you change the initial bestGuess?

To test that, you can run a few scenarios. First, observe the stepwise refinement as the iteration loops through a series of guesses as it tries to calculate the square root of 25:

Fig. 4. Iterating to calculate the square root of 25.

Starting with 25 as the bestGuess, it takes eight iterations for the function to calculate the square root of 25. But what would happen if you made a comical, ridiculously wrong stab at the bestGuess? What if you started with a clueless guess that 1 million might be the square root of 25?
What would happen in such an obviously erroneous situation? Would your function be able to deal with such idiocy? Take a look at the horse's mouth. Rerun the scenario, this time starting from 1 million as the bestGuess:

Fig. 5. Stepwise refinement when calculating the square root of 25 by starting with 1 million as the initial bestGuess.

Oh wow! Starting with a ludicrously large number, the number of iterations roughly tripled (from eight iterations to 23). Not nearly as dramatic an increase as you might intuitively expect.

The moral of the story

The Aha! moment arrives when you realize that not only is iteration guaranteed to solve the problem, but it doesn't matter whether your search for the solution begins with a good or a terribly botched initial guess. However erroneous your initial understanding, the process of iteration, coupled with a measurable test/goal, puts you on the right track and delivers the solution. Guaranteed.

Figures 4 and 5 show a steep and dramatic burndown. From a wildly incorrect starting point, the iteration quickly burns down to an absolutely correct solution.

This amazing methodology, in a nutshell, is the essence of agile DevOps.

Back to some high-level observations

Agile DevOps practice stems from the recognition that we live in a world that is fundamentally based on uncertainty, ambiguity, incompleteness, and a healthy dose of confusion. From the scientific/philosophical point of view, these traits are well documented and supported by Heisenberg's uncertainty principle (covering the uncertainty part), Wittgenstein's Tractatus Logico-Philosophicus (the ambiguity part), Gödel's incompleteness theorems (the incompleteness aspect), and the second law of thermodynamics (the confusion caused by relentless entropy).

In a nutshell, no matter how hard you try, you can never get complete information when trying to solve any problem.
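The convergence claims above are easy to sanity-check for yourself. The sketch below reimplements the article's Babylonian-method loop (shown earlier in C#) as a quick standalone run, in Python for brevity since the algorithm itself is language-agnostic; the iteration cap is an added safety guard, not part of the original loop:

```python
def babylonian_sqrt(number, best_guess, max_iter=1000):
    """Return (square root estimate, iteration count) for a positive
    number, starting the search from an arbitrary initial guess."""
    iterations = 0
    previous_guess = None
    while best_guess != previous_guess and iterations < max_iter:
        previous_guess = best_guess
        # Same refinement step as the article's C# loop.
        best_guess = (previous_guess + number / previous_guess) / 2
        iterations += 1
    return best_guess, iterations

# Starting from the number itself converges in a handful of steps...
root, steps_from_25 = babylonian_sqrt(25, 25)
print(root, steps_from_25)

# ...and even a wildly wrong initial guess adds surprisingly few more.
root, steps_from_million = babylonian_sqrt(25, 1_000_000)
print(root, steps_from_million)
```

Running both scenarios shows the same behavior as Figures 4 and 5: both starting points land on the correct root, and the absurd starting point costs only a modest number of extra iterations.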
It is, therefore, more profitable to abandon an arrogant stance and adopt a more humble approach to solving problems. Humility pays big dividends in rewarding you—not only with the hoped-for solution but also with the byproduct of a well-structured solution.

Conclusion

Nature works incessantly—it's a continuous flow. Nature has no master plan; everything happens as a response to what happened earlier. The feedback loops are very tight, and apparent progress/regress is piecemeal. Everywhere you look in nature, you see stepwise refinement, in one shape or form or another.

Agile DevOps is a very interesting outcome of the engineering model's gradual maturation. DevOps is based on the recognition that the information you have available is always incomplete, so you'd better proceed cautiously. Obtain a measurable test (e.g., a hypothesis, a measurable expectation), make a humble attempt at satisfying it, most likely fail, then collect the feedback, fix the failure, and continue. There is no plan other than agreeing that, with each step of the way, there must be a measurable hypothesis/test.

In the next article in this series, I'll take a closer look at how mutation testing provides much-needed feedback that drives value.

9 Comments

Interesting read Alex! The concept of mutation is well explained. Thanks for sharing.

Thanks for your comment. If you stay tuned, in the next article I'll delve deeper into mutation testing, which is the final frontier when it comes to ensuring the highest possible quality of precision engineering in software.

Very interesting comparison. "When selecting among variability, the process is never arbitrary, capricious, whimsical, nor random. The decisive factor is always measurable. That decisive factor is usually called a test or a goal." That is of course not entirely accurate.
The mutation process, which in itself rarely gives a viable and reproducible alternative, is entirely random by definition, and thus arbitrary and very capricious. But there is a second pass that, with a lot of time and provided there is an appropriate threat level, progressively gives an advantage to any form that is more appropriate.

Is that sieve so tight? Since there are usually several different threats, and not necessarily at the same time, the second pass is also quite arbitrary, and may give a different outcome depending on the timing alone, or on the presence of other species, environmental changes, and so on. The "selected solutions" we can see are a multitude: in order to escape a predator, some animals run faster, others climb higher, and yet others are good at hiding. If predator2 chases away predator1 just because it's heavier and more powerful, the "solutions" may still save those mutations, but if predator2 has better sight and outcompeted predator1 thanks to its ability at feeding (again due to a random mutation), this will invalidate the hiding ability of some prey.

In development projects, clearly the test selects viable solutions, which themselves can vary depending on whoever wrote the code. The order of the tests, their depth and their coverage also play a role in shaping the outcome; then it also depends on who is writing them. All that is also deeply impacted by the environment, the culture, the available tools and libraries, and the available time and resources.

In conclusion, the predictability and mathematical nature of both processes - natural selection and development - are not that obvious. They're evolutionary and, by that, opportunistic; a little difference in the timing or environment may give entirely different outcomes. That's why, facing the same problem, two companies will come up with different solutions and compete. TDD or "traditional" testing will both make sure the solution behaves as expected.
In either case tests can be forgotten, or rushed. The solution will perhaps be driven differently, but I doubt the choice would have any significant impact. If that were the case, methodology selection would have eradicated "traditional" validation approaches in favour of TDD long ago, since it's not a new concept after all.

Interesting take on those challenges. You may have misunderstood the intent of the article. Unlike in nature, iterations in software engineering are not random. They obey a certain algorithm. The only thing that is random is the initial best guess; everything else following that initial guess is deterministic.

Good read Alex. One question: how did you derive the "guess" bestGuess = (previousGuess + (number/previousGuess))/2; ? For me it looks like you still need a pretty clear idea of how the square root is calculated?

If you want to play the engineering game, you have some qualifications. In the case of software engineering, you must join the game by having an arsenal of algorithms in your back pocket. In this case, I reached out for the oldest algorithm in recorded history -- the Babylonian algorithm. But hoping to be able to blindly tackle engineering problems without any prior education is a fool's errand.

Correction -- in the above reply, the first sentence should read: If you want to play the engineering game, you have to have some qualifications.

Then I'm not getting it. Of course easy examples make sense to explain complex topics. But how does a proven-in-use, iterating algorithm compare to nature and evolution? In my understanding evolution is not an iterating algorithm but instead iterating THE algorithm (e.g. mutation) - changing over time the answer to a problem.

There is a huge difference between nature and human-invented engineering. Nature has all the time in the world, while engineering is severely constrained by the budget (time and money concerns).
We could theoretically devise a system where the best guess for the square root of the supplied positive number is just some random number, then check to see if it passes the test, and if not, iterate by taking another wild, random guess. Eventually, given unlimited time in the Universe, the random guess will be correct. You know the saying: "a trillion monkeys, each sitting at a typewriter, will eventually produce the complete works of Shakespeare". Given enough time, anything is possible. But in the world of engineering, we win by winning the race with time. The first-mover advantage and all that jazz.
https://opensource.com/article/19/8/mutation-testing-evolution-tdd
This is a companion article for the presentation given during LinuxCon Europe 2012, held in Barcelona. Here's a link with the slides.

The case for automated testing

It's hard to keep track of problems on a fast moving target. Especially on a *very* fast moving target, such as the Linux kernel. Thousands of commits get into the kernel git repos every week, and although the subsystem maintainers are very careful and competent, they can't predict or catch every problem that might get into the tree, so the more testing we give patchsets proposed for Linux, the better. But it's difficult for the lone kernel developer to test all his/her patches systematically. In this article, we'll discuss using the autotest project to assemble a test grid that kernel developers can use to test their own git repos, helping to catch bugs earlier and improving the overall quality of the kernel.

Why to avoid reliance on user testing only

As we can see from the diagram above, Linux's development model forms an hourglass: starting highly distributed, with contributions being concentrated in maintainer trees before merging into the development releases.

Autotest

Autotest is an open source project that was designed to test the Linux kernel, although it can be used to test userspace programs just fine. It's released under the GPLv2, and comprises a number of tools and libraries that people can use to automate their testing workflows. You can find a complete overview of autotest and quick links to its resources (code, documentation, issue tracker) on the project site.

On a high level, the autotest structure can be represented by the following diagram:

Autotest is composed of the following main modules:

Installing autotest

In order to install autotest, you'll need:

- 2 machines, one that will serve as an autotest server, another that will serve as an autotest client. You can use virtual machines, if you want to experiment first before deploying on your actual bare metal machines.
In fact, this presentation's demo will be made using 2 Fedora 17 virtual machines.

- Download the server install script that comes with autotest (install-autotest-server.sh) with curl.

As the script help will tell you, the distros on which this script is supposed to work are:

- Fedora 16
- Fedora 17
- RHEL 6.2
- Ubuntu 12.04

Once downloaded, you can give the script execute permissions:

chmod +x install-autotest-server.sh

Then execute it, passing the autotest user password (-u) and the database admin password (-d):

./install-autotest-server.sh -u linuxcon -d linuxcon

The script is supposed to install any missing packages. For more information on the install procedure (what it does, troubleshooting), see the documentation.

Adding the initial configuration

Once you have the autotest server installed, you can go to the web admin interface (top right corner, Admin) and add:

- In the Labels tab: a platform label, say, x86_64
- In the Hosts tab: a host, using the IP of your client machine - the machine you installed in the previous step

With this, we're close to our goal.

Setting up passwordless SSH between client and server

It's important for the tests to work that you have an ssh connection between the server machine and the client:

server machine @autotest -> client machine @root

So, log on the server as autotest (you chose the password when running the script). Make sure you create an ssh key:

ssh-keygen -t rsa

You may leave the passphrase empty. Then, copy the key to the client:

ssh-copy-id root@your-client-ip

Type your client password and there, you're done. Please verify that

ssh root@your-client-ip

logs you in directly as root on the client.
Set up cobbler if you have it

Just go to global_config.ini and add the appropriate cobbler install server data (copied here as a reference):

[INSTALL_SERVER]
# Install server type
type: cobbler
# URL for xmlrpc_server, such as
xmlrpc_url:
# XMLRPC user, in case the server requires authentication
user:
# XMLRPC password, in case the server requires authentication
password:
# Profile to install by default on your client machine
fallback_profile:
# Number of installation/reset attempts before failing it altogether
num_attempts: 2

Send your first job

Go to the "Create Job" tab on the web interface and create your first job with a simple client-side sleeptest, without a kernel install. You have to:

- Specify a job name
- Check the 'Sleeptest' checkbox
- In 'Browse Hosts', select your client
- And that's it. Hit the 'Submit Job' button

After less than a minute, you should have your results. Hopefully there won't be any major problems.

A more sophisticated job

As we don't want to run 'hello world' types of tests on a machine, soon you'll find yourself going to more sophisticated jobs. Let's talk about what a job is. A job, in autotest terminology, is a description of the operations that will be performed on a test machine. It is represented by a file, called a control file. A control file in autotest might be of 2 types:

- client side: scope restricted to only the machine executing the tests.
- server side: broader scope, you can control multiple machines.

As an example, consider a client side control file used to build a kernel from a git repo; it was a tad simplified, and you can have even more fine-grained control of the entire process. All the operations there are supposed to be executed on a single machine. Note that step_init and step_test are part of what we call the 'step engine', which helps handle testing where reboots are necessary. In this case, we build a new kernel from a given git repo, then we reboot into this new kernel, then we run the 'kernbench' autotest module.
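The client-side control file listing itself did not survive formatting here. As a rough, hypothetical stand-in (not the original from the talk), a control file is ordinary Python that the autotest client executes with a job object already in scope, so a step-engine control file for the build/reboot/kernbench flow described above might be sketched like this; the git URL is illustrative, and the exact helper calls may differ between autotest versions:

```
# Hypothetical sketch of a client-side control file. The autotest
# client runs this as Python with `job` injected into the namespace.

def step_init():
    # First boot: register the post-reboot step, then build and boot
    # a kernel from the given git repo (URL is an assumption).
    job.next_step([step_test])
    testkernel = job.kernel('git://example.org/linux.git')
    testkernel.build()
    testkernel.boot()

def step_test():
    # After rebooting into the new kernel, run the kernbench module.
    job.run_test('kernbench')
```

The step engine is what lets execution resume at step_test after the reboot triggered in step_init.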
Now, here’s an example of a server side control file, that accomplishes the same, except that we can install a machine with a given cobbler profile: control = '''') ''' def run(machine): host = hosts.create_host(machine) profile = 'Fedora17-x86_64' timeout = 3600 host.machine_install(profile=profile, timeout=timeout) at = autotest_remote.Autotest(host) at.run(control) job.parallel_simple(run, machines) We pretty much just run the client side control file inside the server side scope, through the method run() of the autotest_remote.Autotest() object. The idea here is to install the machine using cobbler, then reboot the machine and then executing the client side control file. This article is not supposed to dwelve into the details of the control files API, for more information you can check our documentation in control files: Controlling your machine: Conmux and Cobbler A big part of running a test grid is being able to remotely control and see what’s going on with the nodes of your grid. In autotest, we use 2 tools to control the following aspects of the machines: - Provisioning - Power cycle - Serial console And those tools are: - Conmux: Lives inside the autotest tree, it’s a console machine multiplexer, provides console access through the ‘console’ command. You can install and configure your autotest server as the conmux server, see: for more information. The idea is to: - Install conmux system wide - Place config files for your machines (see the examples dir on conmux dir) - Test your connection to the console/hardreset - Cobbler: It’s a separate set of tools to provision machines, you can see more info in Cobbler can handle machine power on/off/reset machines, so if you don’t need console control (your lab already provides console access), you can just use cobbler. 
Configuration in autotest is simple:

- Fill global_config.ini with the base xmlrpc URL and auth data
- Fill in your systems in cobbler, with IPs/hostnames matching the autotest DB
- Import the distros you want to install
- That should be it. Everything else can be handled by autotest, and you only have to use the control file APIs or the web interface to choose profiles to install.

Our limitations: embedded

As a result of the scenario we had back when autotest started 7 years ago, the whole design was strongly based on the intel and ppc architectures, so the following functionalities:

- choosing the kernel to boot
- provisioning the machine with a distro
- controlling power and remote consoles

work for intel, powerpc and S390, among others, but not for embedded systems using an SD card or other arrangements common for embedded. Autotest can still be used if the facilities above are not needed, and this is an area where we need help from contributors to implement the missing parts.

Recent work in autotest

During the last year, we've done a lot of work to separate core framework development from test module development, as well as reorganizing the development model. Now we have 3 git repos:

- autotest (core)
- autotest-client-tests
- autotest-server-tests

Each one has a master branch with the current stable code; next, which is the integration branch, regularly QA'ed to try to catch problems as soon as possible; and release branches for longer-term maintenance of code.

Other things worth mentioning:

- Major namespace cleanup: we bit the bullet and cleaned a lot of cruft from the early autotest days
- Packaging work: autotest programs can now also run from a system-wide install, allowing distros to package and ship it
- A stand-alone RPC client, which will be the base for other applications that can use the autotest RPC server to submit jobs and view and query results

Roadmap

We're working to fill the gaps and make autotest better.
Things we plan to do in the coming months:

- Advanced machine scheduling (per machine hardware)
- Evolve the RPC applications
- Better embedded support
- Model automated bisecting

You're welcome to join us!

Getting started and contact info

Autotest code is just one git clone away:

git clone git://github.com/autotest/autotest.git

Check the testsuite documentation and let us know your findings. Please refer to the project's contact information if you want to subscribe to our mailing list or join our IRC channel. Thanks!
https://mybravenewworld.wordpress.com/2012/11/10/assembling-a-kernel-test-grid-with-autotest/
Introduction: 16mm Bluetooth + 6/9 DOF Gyroscope Wearable Board Talking to a Phone and Programmed With Open Source Tools

You have probably done enough large-volume Arduino validation prototyping and want to jump to the next step: building a real mini/micro-sized wearable. However, tools like AD for designing your own PCB can be tricky, and iterating on a design burns money and time, especially when you have strict requirements on size, cost and modules.

Tada! What if there were a board out there that is super small and thin, with Bluetooth 4.0 and a gyro/accelerometer? That sounds like the intel Curie!! Yep; however, they just don't sell it to us, and the Curie isn't that thin either. Oh, it would also be cool to have a wireless charging module!!! That would be great, right?

I browsed all of mouser/digikey/amazon/ebay... and there's no way you can find such a thing. However, I just found this amazing board on Alibaba!!! It contains an nrf51822 BLE chip, an mpu6050 6/9-DOF gyro/accelerometer and a wireless charging module. What's more, the diameter is as small as 16mm, and it's super thin. At first, when I saw this, I thought it could just be some bluffing... HOWEVER!! After I purchased it, it actually WORKED!!! I finished my college project, and my supervisor was really happy with my work!!! To thank the provider of the board, I am going to share some steps for playing with it, especially using free tools rather than J-Link/ST-Link, which are expensive. Open source! Ya!
Step 1: Preparing Materials

1. THSJ board (the 16mm BLE/Gyro/ACC/Wireless Charge board)
2. 3V coin battery
3. MK20 (which makes it possible to program online and directly drag .hex files onto your chip via the MBED platform)
4. Several jumper wires
5. Soldering iron

Step 2: Wiring

There are 8 pads coming out on the board that we can solder to (see picture attached):

1) VCC 2) GND 3) SWDIO 4) SWCLK 5) RXD 6) TXD 7) CTS 8) RTS

Connect (see picture attached):

Board --> MK20 USB
1) VCC --> VCC
2) GND --> GND
3) SWDIO --> RF_DIO
4) SWCLK --> RF_CLK
5) RXD --> TXD
6) TXD --> RXD
7) CTS --> RTS
8) RTS --> CTS

Connect the board and the MK20 with jumper wires following the instructions above, and plug the MK20 into your Mac or Windows machine. A virtual drive will appear on the Desktop. Bang!!!!! You've already finished half of it!

Step 3: MBED and Firmware

MBED is an IoT device platform providing an awesome free online programming environment. It's kind of like an ARM-based Arduino platform. The traditional way to program an ARM-based processor like an stm32 is to use uKeil or other software to write, debug and compile C code, and then use a J-Link or ST-Link to load the program into the chip. However, the software and the *-link can cost several grand, which may not be affordable for students or early prototyping. With mbed, you can simply program/compile online and download the .hex files to your local drive. With an mbed-enabled dev board, once you plug in the USB interface, a virtual drive appears on the desktop. All that's left to do is drag the .hex onto the drive and reset. Tada!!!! Simple, huh? So, the MK20 USB is basically an mbed-interface nrf programmer that lets you directly drag the .hex file onto the "chip". Attached are a demo .hex file and a blink_test.hex file that you can directly drag onto the drive. Once you drag the .hex file onto the drive, the drive will get ejected automatically. Don't worry, that's totally fine.
For those who are using Mac OS X Yosemite: if you fail to load the .hex file, you can simply open "Terminal" and input the following:

sudo mount -u -w -o sync /Volumes/MBED; cp -X * /Volumes/MBED

where * is the path of your .hex file (usually in Downloads, like /Users/(mac name)/Download/*.hex). You can use the blink_test .hex file to test whether you succeeded in loading the hex. Then you can confidently load the demo.hex file.

blink_test code:

#include "mbed.h"

DigitalOut myled(P0_15);

int main()
{
    while(1) {
        myled = 1;
        wait_ms(80);
        myled = 0;
        wait_ms(80);
    }
}

Step 4: Play and Read Values With an App

For sure, if you can develop an iOS/Android app, or even a Win/OS X app, that would be perfect. However, the learning curve for those looks... not that realistic, even though, as geeks, we are supposed to master that. No worries, and I am not going to teach you how to develop an app here. Instead, I will recommend a super powerful and free app for testing. LightBlue is my favourite, since it can directly and easily read services and values. The official nRF Toolbox is cool, but kind of too complicated, especially when you open the open-source iOS app. So next, you can just download LightBlue from the App Store and you will be able to see the data. If you have an interest in developing apps, you can hack on nRF Toolbox using the official code. On GitHub, there is also a lot of other good code you can use.

Discussions

Has anybody else got their hands on this device? Interested to hear another opinion regarding experience working with it.
https://www.instructables.com/id/16mm-Bluetooth-69-DOF-Gyroscope-Wearable-Board-tal/
From: Matthias Schabel (boost_at_[hidden])
Date: 2007-03-26 18:14:45

Phil,

Thanks for taking a look.

> The units of the heat transfer coefficient are W C^-1 m^-2. As far
> as I was able to see, I was not able to write anything like
>
> typedef quantity<power/temperature/area> heat_transfer_coefficient_t;

If your compiler supports native typeof, you can just write this:

typedef typeof(watts/kelvin/pow<2>(meter)) heat_transfer_coefficient_unit_type;
typedef quantity<heat_transfer_coefficient_unit_type> heat_transfer_coefficient_t;

> Rather, it was necessary for me to reduce W C^-1 m^-2 to basic
> dimensions manually. This is not easy for me. I have to remember
> physics that I

If and when auto type deduction becomes a reality in C++, this will all become much simpler. In the meantime, you can either use the typeof trick above or compute the unit in a small test program like this:

std::cout << watts/kelvin/pow<2>(meter) << std::endl;

to get the units:

kg s^(-3) K^(-1)

> so I concluded that the dimensions of power are mass length time^-3
> and hence of heat transfer coefficient are mass time^-3
> temperature^-1 length^-1, and wrote this:

Congrats - you got it right.
Here's a complete program that does what you want (to this point, anyway):

#include <iostream>

#include <boost/units/io.hpp>
#include <boost/units/unit.hpp>
#include <boost/units/quantity.hpp>
#include <boost/units/systems/si.hpp>

using namespace boost::units;
using namespace boost::units::SI;

typedef quantity<power> power_t;
typedef quantity<area> area_t;
typedef quantity<temperature> temp_diff_t;

typedef typeof(watts/kelvin/pow<2>(meter)) heat_transfer_coefficient_unit_type;
typedef quantity<heat_transfer_coefficient_unit_type> heat_transfer_coefficient_t;

int main()
{
    const heat_transfer_coefficient_t watts_per_square_meter_per_kelvin =
        1.0*watts/square_meter/kelvin;

    std::cout << heat_transfer_coefficient_unit_type() << std::endl
              << watts_per_square_meter_per_kelvin << std::endl
              << std::endl;

    return 0;
}

outputting

kg s^(-3) K^(-1)
1 kg s^(-3) K^(-1)

> typedef composite_dimension<mass_tag,
> 1,time_tag,-3,temperature_tag,-1,length_tag,-1>
> heat_transfer_coefficient_t;

This is the correct dimension. You need to associate it with a unit system to get a unit:

typedef unit<heat_transfer_coefficient_t,SI::system> si_htc_unit_t;

> Defining the unit was easier:
>
> const heat_transfer_coefficient_t watts_per_square_meter_per_celcis
> = watts / square_meter / kelvin;
>
> But this fails to compile, with this error: <snip>

You tried to assign a unit to a quantity; quantities can be expressed as value_type*unit, so what you want to do is

const quantity<si_htc_unit_t> watts_per_square_meter_per_celcius =
    7.4*watts/square_meter/kelvin;

> For this library to be useful for me, it needs to be quick to learn
> and easy to apply. Working only with the included units it does
> look like it would work reasonably well, but in the situation I
> have described above it was quickly obvious that it was taking more
> effort to apply it to my program than the benefit that would result.
All I can say is that dimensional analysis and units are more complicated than it seems they ought to be... Compiler support for typeof and auto, or emulation using Boost.Typeof, goes a long way toward simplifying things. I am also happy to add more commonly used units to the library as they crop up...

> 1. What is the effect on compile time? I used #if to switch
> between units and plain floats.

Since the work is done with metaprogramming on lists, the overhead depends on how numerous and how complex the units used are. Steven's work on the implementation side has made, I believe, the compile times quite manageable for the examples we present, even those that use many different units.

> 2. Is there any increase in object file size? I know there
> shouldn't be, but it would be interesting to know for sure.

Steven seems to have addressed this.

> 3. Are the error messages comprehensible? The one shown above is
> not great, but it could have been worse; there are some Boost
> libraries which I find unusable because of the volume and
> impenetrability of the error messages that result from a simple
> typo (though the compiler must share the blame for this).

We've done what we could to make error messages occur near the point of the actual error and be as reasonable as possible, but, as with most template metaprogramming code, errors can't really be described as terse. If you're doing a lot of programming with units, they begin to become easier to decipher...

> - "io" is a misnomer since it only does output, as far as I can see.

Good point. We'll have to think of a better name, I guess...

> - I tend to refer to temperature differences in Celsius, rather
> than Kelvin. There is an obvious issue when dealing with absolute
> temperatures though.

Take a look at unit_example_20 - this deals specifically with the absolute/relative temperature conversion issue...

> - Please don't call the units used in the U.S. "English" units.
> Here in England, we use the S.I.
system for everything except pints > of beer and miles on roadsigns. The units that we did use here > until about 50 years ago were not the same as the ones that the > Americans use. In boost/units/systems/other/non_si_units.hpp I have included conversion factors for Imperial units (the ones used in the recent past in England), US Customary units, US Survey units, nautical units, some non-SI metric units, etc... English units generally refer to heterogeneous and obsolete units, while Imperial and US Customary are the main non-SI systems still in some use... See here, also : > I hope this helps. I will not express an opinion about whether the > library should be included; I leave that to people who have made > more progress with it. Thank you for your input and comments. If you do find some more time to work with it, I would certainly appreciate any additional input you might have. Matthias Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/03/118843.php
An interface to communicate to users via touch using heat/cold and vibration. Heat/Cold/Vibration interface for situational awareness for firefighters, police, military, transportation, gaming, etc. An interface to communicate to users via touch using heat/cold and vibration. Okay so for the first real demo in action we have the shirt wired up with vibration motors to alert the wearer of a fire coming up from behind them. Here is the video. Now that the system is build and the vibration motors are all working well, I have added serial control so that I can have an external device tell the arduino when to activate the motors. For this prototype I am going to add a Raspberry Pi Zero W which will communicate with the Arduino via serial and give it commands of when to activate the motors. My arduino is setup so when it gets an "R" , "C" or "L" on the serial port it will activate the right center or left vibration motor respectively. The arduino code is uploaded. A python example is shown below for controlling it from my windows machine. import serial s = serial.Serial('COM4') s.write('R') #Activate Right Motor s.write('L') #Activate Left motor s.write('C') #Activate Center Motor s.write('RRCCLLLLCCRRCCLLLLCCRR') #Back Massage s.write('0') # Off Okay so here is the first test of the wearable system. This system has 3 peltier heater/coolers and 3 vibration motors. They are located on each shoulder and at the base of the neck. For this test it cycles through each motor and then finally the center peltier module is put into cooling for a second and then heating for a second. The code for this test is uploaded on the project page. Since the peltier modules are in pockets, one second of heating and cooling is not obvious to the user. At this point, I am not sure if these will work for a user interface. Unlike the motors which are easily detected to be on and off in a fraction of a second, it takes some time for the peltiers to heat up and cool down. 
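Going back to the serial protocol above: the Arduino simply activates whichever motor matches each received character ('R', 'C', 'L'), and '0' turns everything off. A small Python model of that dispatch logic (the motor labels are just for illustration) makes the protocol easy to see:

```python
# Hypothetical model of the Arduino-side dispatch: each received character
# activates the matching vibration motor; '0' turns all motors off.
MOTORS = {'R': 'right', 'C': 'center', 'L': 'left'}

def dispatch(command_stream):
    """Return the sequence of motor activations for a command string."""
    actions = []
    for ch in command_stream:
        if ch == '0':
            actions.append('all-off')
        elif ch in MOTORS:
            actions.append(MOTORS[ch])
    return actions

print(dispatch('RRC'))  # ['right', 'right', 'center']
```

Feeding it the "back massage" string from earlier ('RRCCLL...') would simply expand into the corresponding alternating motor sequence.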
That time is even longer with a little fabric between the user and the module. Another issue with the peltier modules is that they will work much better when I give them a good heatsink on the back. For police, military, gamers and drivers having heatsinks on your shoulders is weird but seems do-able. For firefighters having a heatsink into your insulated gear doesn't help much and having a heatsink into a fire environment would be a really bad idea. Okay, so I got the first (very rough) version working. To control the Peltier heating/cooling modules I ended up going with a Arduino Mega 2560 that provides PWM to an Adafruit Motor control shield. Since peltier modules should have a constant DC current instead of PWM, I added some 47 uF capacitors to help smooth out the PWM signals. Next week, I'll take them back to the lab at school to check to see how much good the capacitors are actually doing. The motor control shield is handy because it controls 4 motors and does forward and reverse. For the peltier modules forward and reverse switch which side is hot and which side is cold. Since the vibration motors draw about 100 mA at 5V, they can't be driven directly by the Arduino either. So I built a little board with 5 transistors on it that are used to switch on and off the vibration motors. The whole apparatus is powered by two 18650 li-ion cells that I salvaged from an old laptop battery. A shot of the whole setup without the attachments to the shirt is shown below: To attach them to the shirt, I hot glued pockets on the inside of the shirt for the peltier modules and vibration motors. I also cut little holes for the wires to exit. I added straps to hold the Arduino on and a pocket to hold the batteries on the back of the shirt. When finally put together the shirt looks like this: Okay so for control and power here is what I am looking at: For the vibration motors they will take between 2 and 5V with higher voltages causing more vibration. 
About 3 to 4 volts will work for my purposes. This will draw about 100 mA. For the Peltier coolers, from my experimentation it seems like 5V @ 1.5 A will be sufficient. However, they will need to be controlled so that they provide steady cooling and then heating when needed for alerts. I am thinking it will not be as simple as turning them on and off, or even using PWM. Based on the discussion here, it seems like I don't want to turn them on and off using a thermostat-like device, because that is inefficient and causes thermal cycling. They also say that PWM signals will make them work less efficiently.

For the first version of control, I am thinking of using my Arduino along with the adafruit motorshield to control the peltier coolers. I will add in some capacitors to build a simple RLC circuit to smooth out the PWM signals. As for the on/off vibration motor control, I'll just use some digital pins attached to transistors for switching.

So now that I have all the parts, I have begun to experiment with them to see how they work by themselves. First I am going to have a look at the Peltier module. Peltier modules use the thermoelectric effect to produce a temperature gradient when given an applied voltage, and can also produce a voltage when given a temperature gradient. For more info on how they work and a description of their naming convention, you can read this. My module is a TEC1-12705, which is 40mm x 40mm. I first tried powering it with a Li-Ion cell, giving it about 4V. Later I tried using a 5V power supply that I have. All of the following results are with a 5V power source and drawing about 1 to 2 amps. For more details see the datasheet.
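As a rough sanity check on the capacitor-smoothing idea mentioned earlier, one can estimate the corner frequency of the resulting low-pass filter. The source resistance below is an assumption for illustration, not a measurement; only the 47 uF value comes from the build log.

```python
import math

# Rough estimate of the low-pass corner frequency formed by the driver's
# effective source resistance and the smoothing capacitor.
R_source_ohms = 2.0   # ASSUMED effective source resistance, not measured
C_farads = 47e-6      # the 47 uF capacitors mentioned in the build log

f_cutoff = 1.0 / (2.0 * math.pi * R_source_ohms * C_farads)
print(f_cutoff)  # ~1693 Hz with these assumed values
```

A cutoff in the low-kHz range suggests a single 47 uF capacitor attenuates high PWM frequencies but passes lower ones, which is consistent with the plan to verify the actual ripple on a scope at the lab.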
For my first heat sink I used a giant aluminum piece of scrap that we have sitting around the lab. I originally just placed the module on the aluminum, but this didn't work very well. Then I applied a small amount of thermal paste and that didn't work much better. I applied a generous portion to coat the whole hot side of the module and that worked really really well. Here you can see that the hot side is 29 C and the cold side is -4.5 C. The cold side is below freezing while the hot side is only slightly warm! The aluminum block is doing a great job pulling away all the heat. I also realized that you can just reverse the voltage to cause the peltier to switch which side is hot and which is cold. This is really great for my application and it means that I don't need to build in a separate heater (the conductive fabric) anymore. I can use the same peltier modules to produce hot and cold. Starting out I have a basic concept and the major components out and ready to play. Here I'll describe the core components that I am using. Moving forward, I am thinking about what electronics will be needed to control each output so that they can reliably send a signal to the user. I got this from Adafruit, you can find the datasheet here. This fabric is conductive to electricity and can be used for heating. I also purchased this from Adafruit, I am not 100% sure about what the model number is or the specs on these motors. You could get a controller for these and make them do all sorts of things. At this point, I will start off with just turning them on and off. The product page says they take from 2 to 5V. Increasing voltage causing greater vibration. I tested them out with about 4V and found the vibration to be satisfactory. These devices are much more affordable on ebay and even cheaper if you can wait for them to be shipped from Asia. There are a couple of different versions on ebay with slightly different dimensions. The datasheet can be found online. 
For an initial test, I powered it with 4V and noticed that one side got noticeably cool and the other noticeably warm. These modules should be good for up to 14 V (room temperature) or 16V (when 50C hot). I haven't tried them to extensively yet, because I have not yet set up a heatsink for the hot side.
https://hackaday.io/project/160443-tactile-tactical-interface
All:

I've been going over the new consumer APIs and it seems like we're squishing a lot of different concerns together into a single class. The scope of the new Consumer is kind of all over the place. Managing the lifecycle - and especially the thread safety - seems challenging. Specifically, Consumer seems to serve the following purposes:

* Acts as a holder of subscription info (e.g. subscribe()).
* Acts as a stream (e.g. poll(), seek()).

I definitely think we want these to be separate. It's pretty common to have a consumer process that connects to the broker, creates N consumer threads, each of which working on a single stream (which could be composed of some number of partitions). In this scenario, you *really* want to explicitly control durability (e.g. commit()s) on a per-stream basis. You also have different lifecycle semantics and thread safety concerns at the stream level versus the global level. Is there a reason the API doesn't look more like:

// Thread safe, owns the multiplexed connection
Consumer:
  def subscribe(topic: String, streams: Int): Set[Stream]
  def close() // Release everything

// Not at all thread safe, no synchronization.
Stream:
  def commit() // Really important this be here and not on Consumer.
  def seek(...)
  def poll(duration: Long, unit: TimeUnit): List[MessageThingie]
  def close() // Release these partitions
  ...

I think this also significantly reduces the complexity of the Consumer API and lets each thread in a consumer process handle stream lifecycle appropriately. Since the connection is multiplexed and things could get rebalanced, just toss an exception if the streams become invalid, forcing a resubscribe. That way we don't have crazy state logic.

I'm sure I'm missing something, but I wanted to toss this out there for folks to poke at.

(p.s. I *really* want per-stream commit baked into the API.)

--
E. Sammer
CTO - ScalingData
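For concreteness, the proposed split above could be rendered as Java interfaces. Everything below is hypothetical, including the trivial in-memory stand-in, which exists only so the shape of the API can be exercised.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.TimeUnit;

// Hypothetical Java rendering of the proposed Consumer/Stream split.
interface Stream {
    void commit();                                 // per-stream durability control
    void seek(long offset);
    List<String> poll(long duration, TimeUnit unit);
    void close();                                  // release these partitions
}

interface Consumer {
    Set<Stream> subscribe(String topic, int streams);
    void close();                                  // release everything
}

// Trivial in-memory stand-in, just to exercise the API shape.
class InMemoryConsumer implements Consumer {
    public Set<Stream> subscribe(String topic, int streams) {
        Set<Stream> result = new HashSet<>();
        for (int i = 0; i < streams; i++) {
            result.add(new Stream() {
                public void commit() { /* no-op in the stub */ }
                public void seek(long offset) { /* no-op in the stub */ }
                public List<String> poll(long duration, TimeUnit unit) {
                    List<String> out = new ArrayList<>();
                    out.add(topic + "-message");
                    return out;
                }
                public void close() {}
            });
        }
        return result;
    }
    public void close() {}
}

public class Main {
    public static void main(String[] args) {
        Consumer c = new InMemoryConsumer();
        Set<Stream> streams = c.subscribe("events", 3);
        System.out.println(streams.size());        // 3
        for (Stream s : streams) {
            s.poll(100, TimeUnit.MILLISECONDS);
            s.commit();                            // per-stream commit
        }
        c.close();
    }
}
```

The key design point survives the translation: `commit()` lives on `Stream`, so each worker thread controls durability for exactly the partitions it owns.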
http://mail-archives.us.apache.org/mod_mbox/kafka-users/201405.mbox/%3CCAB0ngGnHOyFbNFDYObpJAM6_4SLKRH-MPNPj-e5opBPJ8FB81g@mail.gmail.com%3E
Published by Dorian Springer. Modified about 1 year ago.

Slide 1: SUMMARY: abstract classes and interfaces

Make a class abstract so instances of it cannot be created. Make a method abstract so it must be overridden. An interface is like an abstract class whose methods are all abstract and whose fields are all public constants. This allows multiple inheritance without ambiguity. An interface has a different syntax and a different way of using it.

References to the text and to JavaSummary.pptx:
Abstract class: C.27, slides
Abstract method: C.27, slide 44
Interface declaration: D.11-D.13, D.28, slide 60
Implementing interfaces: D.14-D.15, slide 60
Casting with interfaces: none, slide 61
Interface Comparable: D.20, slide 62

Slide 2: abstract classes and interfaces

Teach using the problem of representing shapes in the plane with objects. Every shape has a position (x, y) in the plane, so use a superclass Shape to hold the point. Each subclass has the fields necessary to describe its shape: Circle adds a field for the radius; Rect adds fields for length and width.

Slide 3: Every subclass has a different area() function

We are dealing with shapes that have areas: Circles, Rectangles, Triangles, Polyhedrons, Squares, etc. Therefore, each subclass has a (different) function area(), which returns its area.
4 Making our points with scaled-down classes 4 … area() … Circle Shape …Object … area() … Rect Shape …Object public class Shape { } public class Circle extends Shape { public double area() { return 1; } public class Rect extends Shape { public double area() { return 1; } 5 Motivating abstract classes 5 Shape[] … area() … Circle Shape …Object b …Shape …Object … area() … Rect Shape …Object … area() … Rect Shape …Object b[1].area() is illegal, even though each Subclass object has function area() Don’t want to cast down! Instead, define area() in Shape Cast? if (b[1] instanceof Rect) r= ((Rect)b[1]).area(); 6 Motivating abstract classes 6 Shape[] area() in class Shape doesn’t return useful value … area() … Circle Shape …Object b … area() … Rect Shape …Object … area() … Trian Shape …Object area() … … … area() … Trian Shape …Object public double area() { return 0.0; } Problem: How to force subclasses to override area? Problem: How to ban creation of Shape objects area() 7 Abstract class and method solves both problems 7 public abstract class Shape { public abstract double area(); … } Abstract class. Means can’t create object of Shape: new Shape(…) syntactically illegal Abstract method. Means it must be overridden in any subclass Place abstract method only in abstract class. Body is replaced by ; 8 8 Can extend only one class public class C extends C1, C2 { public void p() { …; h= m(); … } public class C1 { public int m() { return 2; } … } public class C2 { public int m() { return 3; } … } if we allowed multiple inheritance, which m used? About interfaces 9 9 Can extend only one class public class C extends C1, C2 { … } public abstract class C1 { public abstract int m(); public int p() {…} } public abstract class C2 { public abstract int m(); public int q(){…} } Use abstract classes? Seems OK, because method bodies not given! 
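A tiny compilable illustration of the interface rules just described; the names here are made up and not from the slides.

```java
// Illustration of the interface rules: methods are implicitly public and
// abstract, fields are implicitly public, static, and final.
interface Measurable {
    double SCALE = 2.0;    // really: public static final double SCALE = 2.0
    double size();         // really: public abstract double size()
}

class Square implements Measurable {
    private final double side;
    Square(double side) { this.side = side; }
    public double size() { return SCALE * side; }  // the constant is inherited
}

public class Main {
    public static void main(String[] args) {
        Measurable m = new Square(3.0);
        System.out.println(m.size());              // 6.0
        // Measurable x = new Measurable();        // would not compile:
        // an interface is fully abstract, so it cannot be instantiated
    }
}
```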
But Java does not allow this, because abstract classes can have non-abstract methods Instead, Java has a construct, the interface, which is like an abstract class but has more restrictions. 10 10 Interfaces An interface is a fully abstract class with a slightly different syntax. An interface can contain type signatures for methods, just like abstract methods in abstract classes, but they have to be public. An interface can contain fields, but they have to be public, static, and final and they have to contain an initializer. So they are really just constants 11 11 Interface declaration and use of an interface public class C implements C1, C2 { … } public interface C1 { int m(); int p(); int FF= 32; } public interface C2 { int m(); int q(); } Methods declared in interface are automatically public, abstract Use of public, abstract is optional Use ; not { … } Field declared in interface automatically public, static, final Must have initialization Use of public, static, final optional Eclipse: Create new interface? Create new class, change keyword class to interface C must override all methods in C1 and C2 12 Casting with interfaces 12 class B extends A implements C1, C2 { … } interface C1 { … } interface C2 { … } class A { … } b= new B(); What does object b look like? A Object B Draw b like this, showing only names of partitions: Add C1, C2 as new dimensions: C2 C1 Object b has 5 perspectives. Can cast b to any one of them at any time. Examples: (C2) b (Object) b (A)(C2) b (C1) (C2) b You’ll see such casting later 13 Same rules apply to classes and interface 13 class B extends A implements C1, C2 { … } interface C1 { … } interface C2 { … } class A { … } B b= new B(); C2 c= b; A Object B C2 C1 c C2 b B c.m(…) syntactically legal only if m declared in C2 c.m(…) calls overriding m declared in B 14 14 Shape[] Want to sort b by shape areas. Don’t want to write a sort procedure —many already exist. Avoid duplication of effort! 
… area() … Circle Shape …Object b … area() … Rect Shape …Object … area() … Trian Shape …Object area() … … … area() … Trian Shape …Object area() b could be sorted on many things: area distance from (0,0) x-coordinate … Sort array of Shapes 15 15 Shape[] Want to sort b by shape areas. Don’t want to write a sort procedure —many already exist. Avoid duplication of effort! … area() … Circle Shape …Object b … area() … Rect Shape …Object … area() … Trian Shape …Object area() … … … area() … Trian Shape …Object area() Sort array of Shapes Solution: Write a function compareTo that tells whether one shape has bigger area than another. Tell sort procedure to use it. 16 16 Look at: interface java.lang.Comparable /** Comparable requires method compareTo */ public interface Comparable { /** = a negative integer if this object < c, = 0 if this object = c, = a positive integer if this object > c. Throw a ClassCastException if c cannot be cast to the class of this object. */ int compareTo(Object c); } Classes that implement Comparable: Boolean Byte Double Integer … String BigDecimal BigInteger Calendar Time Timestamp … In class java.util.Arrays: public static void sort (Comparable[] a) {…} 17 17 17 Which class should implement Comparable? Shape Object Circle Comparable First idea: all the subclasses Circle, Rect, … Doesn’t work! Each element of b has static type Shape, and compareTo isn’t available in Shape partition Shape[] b= … … Shape Object Circle Comparable Use this. Shape must implement Comparable Shape[] b … 18 18 18 Shape should implement Comparable Shape[] b= … … Arrays.sort(b); Shape Object Circle Rect … Triangle Comparable … In class java.util.Arrays: public static void sort (Comparable[] a) {…} b a ?? Comparable[] Shape[] Cast from Shape[] to Comparable[] happens automatically 19 19 public abstract class Shape implements Comparable { /** If c is not a Shape, throw a CastClass exception. 
Otherwise, return a neg number, 0, or a pos number depending on whether this shape has a smaller area than c, the same area, or a greater area */
public int compareTo(Object c) {
    return (int) (area() - ((Shape) c).area());
}
...

The cast is needed so that area() can be used. If c is not a Shape, an exception is thrown. Class Shape implements Comparable. We take advantage of the fact that we don't have to return -1, 0, or 1! Simpler code.

Slide 20: Beauty of interfaces

Arrays.sort sorts an array or list C[] for any class C, as long as C implements interface Comparable, and thus implements compareTo to say which of two elements is bigger.

Java library static method: Arrays.sort(Comparable[] a). Class Arrays has many other useful static methods.
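Condensing the slides into one runnable program; this sketch uses the generic Comparable&lt;Shape&gt; (with Double.compare) rather than the raw Comparable and int subtraction shown on the slides, and the dimensions are made up.

```java
import java.util.Arrays;

// Condensed, runnable version of the slides' Shape hierarchy.
abstract class Shape implements Comparable<Shape> {
    public abstract double area();
    // Negative, zero, or positive depending on relative areas.
    public int compareTo(Shape c) {
        return Double.compare(area(), c.area());
    }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Rect extends Shape {
    private final double w, h;
    Rect(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

public class Main {
    public static void main(String[] args) {
        Shape[] b = { new Rect(3, 4), new Circle(1), new Rect(1, 2) };
        Arrays.sort(b);   // uses compareTo: smallest area first
        for (Shape s : b) System.out.println(s.area()); // 2.0, then PI, then 12.0
    }
}
```

Using Double.compare avoids the truncation pitfall of returning an int difference when two areas differ by less than 1.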
http://slideplayer.com/slide/3377531/
Question:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using StudentDB;

public partial class Default3 : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        DataClassesDataContext db = new DataClassesDataContext();

        Student st = new Student();
        st.StudentID = "1111";
        st.Studentname = "Test1";
        db.Students.InsertOnSubmit(st);
        Response.Write("Working1!!" + "<br>");

        Course co = new Course();
        co.CourseID = "1231";
        co.Course_Name = "Physics1";
        db.Courses.InsertOnSubmit(co);
        Response.Write("Working2!!" + "<br>");
        db.SubmitChanges();

        CourseStudent ct = new CourseStudent();
        ct.CourseID = "1231";
        ct.StudentID = "1111";
        //db.CourseStudents.InsertOnSubmit(ct);
        db.ExecuteCommand("Insert into CourseStudent(CourseID,StudentID) values({0},{1})",
            ct.CourseID, ct.StudentID);
        Response.Write("Working3!!" + "<br>");

I would also like an example of how to do the above code using LINQ lambda expressions.

Answer:

Entity Framework and LINQ to SQL require a primary key as the identifier for a record; that's why your insert isn't working. You have to take a look at your database model. For example, can a student only have one of the same course? That is:

12314 - 1111
12314 - 1112

is allowed, but not

12314 - 1111
12314 - 1111

If so, you can make CourseID and StudentID together a composite primary key and keep the foreign key constraints. Otherwise, you need to give the table its own identifier column and set that one as the primary key.

And why do you want lambda expressions, and where? LINQ lambda queries are mostly used when querying a data source, where the query is converted to SQL. The code here is just insert, insert, and insert; there is no select or anything else that would be converted to a LINQ expression. Also, why are you calling db.SubmitChanges() before inserting into the related table? Does it still break when you enable the commented-out InsertOnSubmit(ct) line and move db.SubmitChanges() below it?

Follow-up from the asker: When I enable that line, I get:

Can't perform Create, Update or Delete operations on 'Table(CourseStudent)' because it has no primary key.
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.InvalidOperationException

Source Error:
Line 27: ct.CourseID = "12314";
Line 28: ct.StudentID = "1111";
Line 29: db.CourseStudents.InsertOnSubmit(ct);

For more explanation: CourseID is a foreign key in the Course table and StudentID is a foreign key in the Student table. Please let me know why I'm getting the above error and how to resolve it.
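The accepted fix, a composite primary key on the join table, can be demonstrated outside LINQ to SQL entirely. Below is a sketch using Python's standard-library sqlite3 module; the table and column names mirror the question, but this is an illustration of the constraint, not the asker's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The join table gets a composite primary key over both foreign-key
# columns, so each (CourseID, StudentID) pair can appear at most once.
cur.execute("""
    CREATE TABLE CourseStudent (
        CourseID  TEXT NOT NULL,
        StudentID TEXT NOT NULL,
        PRIMARY KEY (CourseID, StudentID)
    )
""")

cur.execute("INSERT INTO CourseStudent VALUES (?, ?)", ("12314", "1111"))
cur.execute("INSERT INTO CourseStudent VALUES (?, ?)", ("12314", "1112"))  # OK: new pair

try:
    cur.execute("INSERT INTO CourseStudent VALUES (?, ?)", ("12314", "1111"))
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)  # the composite key blocks the repeat
```

With the composite key in place, the ORM has an identifier for each row, which is exactly what LINQ to SQL's InsertOnSubmit was missing.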
https://www.experts-exchange.com/questions/28242415/ASP-net-C-LINQ.html
Red Hat Bugzilla – Bug 32056: g++ does not detect non-integral in-class initializer
Last modified: 2007-04-18 12:32:13 EDT

From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.2.16-22smp i686)

g++ fails to detect an illegal attempt to initialize a non-integral static const member of a class inside the class declaration.

Reproducible: Always

Steps to Reproduce:
The following piece of code should generate an error:

#include <stdio.h>

class A {
public:
    const static int N = 10;         // OK
    const static float EPS = 0.0001; // illegal
};

const int A::N;
const float A::EPS;

int main() {
    const int *pi = &A::N;
    printf("%d %d\n", A::N, *pi);
    const float *pf = &A::EPS;
    printf("%g %g\n", A::EPS, *pf);
    return 0;
}

Actual Results: Currently, g++ (gcc-2.96-69) will accept that piece of code and produce an executable.

Expected Results: g++ should abort compilation with an error. See page 249 of Bjarne Stroustrup, "The C++ Programming Language", Special Edition, 2000.

No, it does detect it, but handles this as an extension to the standard. If you want a warning, simply run g++ with -pedantic; if you want this to be an error, run with -pedantic-errors:

g++ -pedantic-errors test.C
test.C:6: ISO C++ forbids initialization of member constant `EPS' of non-integral type `const float'
https://bugzilla.redhat.com/show_bug.cgi?id=32056
This article covers the use of the Menu Button in Tkinter Python.

What is the Python Tkinter Menu Button?

A combination of both the button and menu widgets, the Python Tkinter Menu Button widget displays a drop-down menu with a list of options once clicked. Otherwise it only shows one option. If you haven't been through our Tkinter Menu article, head over there first and read more about menus. A menu button is just a button to which a menu is attached; refer to that article to understand how the menu is made.

Menu Button Syntax:

MenuBttn = Menubutton(master, option..... )

Menu Button Options

A complete list of options for the Tkinter Menu Button. Note, however, that some only work with the newer "ttk" version. A short segment on this is included at the end of the tutorial.

Menu Button Example 1.

This example explores the use of a Menu Button using a menu with check buttons. Remember that the menu's master option should always be the name of the Menu Button to which it is to be assigned, and you have to assign the menu to the Menu Button, as shown in the example, for it to actually display.

from tkinter import *

root = Tk()
root.geometry("200x150")

frame = Frame(root)
frame.pack()

MenuBttn = Menubutton(frame, text = "Favourite food", relief = RAISED)

Var1 = IntVar()
Var2 = IntVar()
Var3 = IntVar()

Menu1 = Menu(MenuBttn, tearoff = 0)
Menu1.add_checkbutton(label = "Pizza", variable = Var1)
Menu1.add_checkbutton(label = "Cheese Burger", variable = Var2)
Menu1.add_checkbutton(label = "Salad", variable = Var3)

MenuBttn["menu"] = Menu1

MenuBttn.pack()
root.mainloop()

Menu Button Example 2.

This example is the same as the one above, except this time we're using radio buttons instead of check buttons. As expected, this causes only one option to be selected at a time.
from tkinter import *

root = Tk()
root.geometry("200x150")

frame = Frame(root)
frame.pack()

MenuBttn = Menubutton(frame, text = "Favourite food", relief = RAISED)

Var1 = IntVar()

Menu1 = Menu(MenuBttn, tearoff = 0)
Menu1.add_radiobutton(label = "Pizza", variable = Var1, value = 1)
Menu1.add_radiobutton(label = "Cheese Burger", variable = Var1, value = 2)
Menu1.add_radiobutton(label = "Salad", variable = Var1, value = 3)

MenuBttn["menu"] = Menu1

MenuBttn.pack()
root.mainloop()

TTK MenuButton

The newer ttk library brings in updated widgets which generally perform better than the originals. They offer more flexibility and have more options. Some of the options mentioned above only work with the ttk version of the Menu Button. In the section below we demonstrate how to do this, and we also use an option (textvariable) which is not available in the tkinter Menu Button.

from tkinter import *
from tkinter.ttk import Menubutton

def f():
    var.set("Food")

root = Tk()

var = StringVar()
mb = Menubutton(root, textvariable = var)
mb.pack()

mb.menu = Menu(mb, tearoff = 0)
mb["menu"] = mb.menu

b = Button(root, text = "Click", command = f)
b.pack()

mayoVar = IntVar()
ketchVar = IntVar()
mb.menu.add_checkbutton(label = "mayo", variable = mayoVar)
mb.menu.add_checkbutton(label = "ketchup", variable = ketchVar)

mb.pack()
root.mainloop()

This marks the end of our Python Tkinter Menu Button article. You can head back to the main Tkinter article using this link. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article's tutorial content can be asked in the comments section below.

Dude, a lot of the parameters listed in this tutorial do not actually work when you write them into the code. For example, textvariable can't go inside an .add(type, option) call on a menu object, and neither can it go into the Menu() widget creation. This isn't the only tutorial on this website with these mistakes!
I have to try out every single parameter, and half of them don't work. Please clean this up!

I'm glad to see your feedback. I actually tried it out myself and realized the issue: you need to use the newer ttk Menubutton for these newer features. I will now include an example in this article. I'll also try to work out an article dedicated to explaining ttk and its uses.
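Coming back to the two examples above, the key difference is how the variables are wired: each checkbutton owns its own IntVar, while all radiobuttons share a single one. The same state model can be sketched without a GUI (this is an illustration of the wiring, not Tkinter code):

```python
# Check-button model: one independent variable per option.
check_vars = {"Pizza": 0, "Cheese Burger": 0, "Salad": 0}
check_vars["Pizza"] = 1   # toggling one option...
check_vars["Salad"] = 1   # ...does not affect the others
print(check_vars)         # two options can be selected at once

# Radio-button model: one shared variable; setting it overwrites the
# previous selection, so only one option is active at a time.
radio_values = {"Pizza": 1, "Cheese Burger": 2, "Salad": 3}
shared_var = radio_values["Pizza"]
shared_var = radio_values["Salad"]   # replaces the earlier choice
selected = [k for k, v in radio_values.items() if v == shared_var]
print(selected)  # ['Salad']
```

In the real widgets, Var1.get() plays the role of reading these dictionaries: it returns 0 or 1 per checkbutton variable, and the value of the chosen radiobutton for the shared variable.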
https://coderslegacy.com/python/python-gui/python-tkinter-menu-button/
If you have a good understanding of HTML, CSS, JavaScript, and React, you might be wondering where to go next on your learning journey. So why not check out Scrimba's brand new, free tutorial on how to build the classic tic-tac-toe game in React?

Click above to go to the tutorial.

The example game is pulled from React's official tutorials but is brought up to date using React hooks - the latest hot topic in the React world. This article gives you an overview of the tutorial and lets you click through to the screencasts and play with the code at any time.

If your HTML, CSS, JavaScript or React skills feel shaky, never fear - Scrimba offers a huge range of tutorials to bring you up to speed. We recommend the following courses to get you ready for the tic-tac-toe tutorial:

- HTML & CSS Crash Course with Kevin Powell
- Introduction to Javascript with Dylan C. Israel
- Learn React for Free with Bob Ziroll

In true Scrimba style, the Build Tic-Tac-Toe with React Hooks tutorial contains loads of interactive challenges along the way, so you'll embed your learning and feel like a hooks wizard by the end of it.

The course is led by Thomas Weibenfalk, a passionate developer, designer, and coding instructor from Sweden. Thomas loves teaching people about front-end development, especially React, making him the ideal teacher to take you through this learning experience.

Assuming you're ready to go with tic-tac-toe, let's get started!

Introduction

In the first scrim, Thomas tells us about his plans for the tutorial and shares the Official React Documentation, which we can refer to for a more detailed explanation of the features used to build the game. As well as talking us through the file structure he has implemented for the app, Thomas also gives us our first glimpse of the finished product.

Click the image above to visit the cast.
Scaffolding Components

Next up, Thomas talks us through setting up the Board.js, Game.js and Square.js files needed to create the game. We also see how to import Game.js into the App.js file.

import React from "react";
import Game from "./components/Game";

const App = () => <Game />;

export default App;

Square Component and Destructuring Props

In the next scrim, we create our square component using JSX to add a button:

const Square = (props) => (
  <button onClick={props.onClick}>{props.value}</button>
);

For our first challenge, Thomas encourages us to destructure out the props in our component. Click through to the cast to give the challenge a try.

Board Component and Scaffolding Functions

Now it's time to create the board component by importing the square component and adding nine instances of it to the board (note that we will refactor this with a loop to improve the code later):

<div>
  <Square value="1" onClick={() => onClick("dummy value")} />
  <Square value="2" onClick={() => onClick("dummy value")} />
  <Square value="3" onClick={() => onClick("dummy value")} />
  <Square value="4" onClick={() => onClick("dummy value")} />
  <Square value="5" onClick={() => onClick("dummy value")} />
  <Square value="6" onClick={() => onClick("dummy value")} />
  <Square value="7" onClick={() => onClick("dummy value")} />
  <Square value="8" onClick={() => onClick("dummy value")} />
  <Square value="9" onClick={() => onClick("dummy value")} />
</div>

Thomas also scaffolds out the functions we need in Game.js, which we'll complete later.

Square Styling

Click the image to access the cast.

Next up, we style our squares using the style default prop:

const style = {
  background: "lightblue",
  border: "2px solid darkblue",
  fontSize: "30px",
  fontWeight: "800",
  cursor: "pointer",
  outline: "none",
};

const Square = ({ value, onClick }) => (
  <button style={style} onClick={onClick}>
    {value}
  </button>
);

Board Styling

Click the image to access the scrim.

Now that our squares are ready, it's time to style the board.
Thomas kicks us off by once again creating a style object, this time with CSS grid:

const style = {
  border: "4px solid darkblue",
  borderRadius: "10px",
  width: "250px",
  height: "250px",
  margin: "0 auto",
  display: "grid",
  gridTemplate: "repeat(3, 1fr) / repeat(3, 1fr)",
};

Our challenge now is to apply the style object to the board. Head over to the scrim to give it a try. Don't forget, while Thomas has provided some great styling options, Scrimba is fully interactive, so you are free to customize your game however you prefer - let your imagination run wild!

The calculateWinner Function Explained

Next up, Thomas explains the all-important calculateWinner() function, which is taken from the original tutorial from reactjs.org. Head over to the cast to explore the function and hear how it works.

Create States and Fill with Initial Data

In the next scrim, we start creating the logic for the game. We begin by adding a hook called useState in Game.js and creating states to set an empty board and specify the next player. Lastly, we add const winner, which tells us whether the latest move was a winning one:

const [board, setBoard] = useState(Array(9).fill(null));
const [xIsNext, setXisNext] = useState(true);
const winner = calculateWinner(board);

Over in Board.js, we delete our manually-rendered squares and replace them with mapped squares as promised earlier. Click through to see this in detail:

const Board = ({ squares, onClick }) => (
  <div style={style}>
    {squares.map((square, i) => (
      <Square key={i} value={square} onClick={() => onClick(i)} />
    ))}
  </div>
);

Create the handleClick Function

Next, we create the handleClick() function, which carries out the calculations when we make a move:

const handleClick = (i) => {
  const boardCopy = [...board];
  // If user clicks an occupied square or if game is won, return
  if (winner || boardCopy[i]) return;
  // Put an X or an O in the clicked square
  boardCopy[i] = xIsNext ? "X" : "O";
  setBoard(boardCopy);
  setXisNext(!xIsNext);
};

renderMoves Function and the Last JSX

Click above to go to the tutorial.

In this scrim, we create the JSX which makes the game playable.

<>
  <Board squares={board} onClick={handleClick} />
  <div style={styles}>
    <p>
      {winner
        ? "Winner: " + winner
        : "Next Player: " + (xIsNext ? "X" : "O")}
    </p>
  </div>
</>

We now have all the knowledge needed to create a fully working tic-tac-toe game with React hooks!

BONUS: Implement Time Travel

Click the image to go to the bonus tutorial.

In the bonus scrim, we take our game to the next level by implementing time travel to review the moves made throughout the game. Click through to get the goodies in this bonus tutorial.

So there we have it - a fully working tic-tac-toe game using React hooks! I hope you found this tutorial helpful. Don't forget, you can refer back to it at any time to refresh your memory of the topics covered or play around with the code in the interactive screencasts.

Next up, why not check out some of the many other tutorials available on Scrimba? With a huge range of topics, there is something for everyone. Happy learning :)
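As an aside (not part of the Scrimba tutorial), the winner check used by the game is plain array logic with nothing React-specific about it, which makes it easy to port and unit-test anywhere. Here is the same line-checking algorithm sketched in Python, using a flat nine-cell board indexed row by row, exactly as in the JavaScript version:

```python
# Each tuple lists three board indices that form a winning line.
LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def calculate_winner(squares):
    """Return "X" or "O" if a line is complete, else None."""
    for a, b, c in LINES:
        if squares[a] and squares[a] == squares[b] == squares[c]:
            return squares[a]
    return None

board = ["X", "X", "X",
         "O", "O", None,
         None, None, None]
print(calculate_winner(board))  # X
```

Because the function is pure, it can be tested exhaustively against every board position without rendering a single component, which is exactly why the tutorial keeps it separate from the React state code.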
https://www.freecodecamp.org/news/learn-how-to-build-tic-tac-toe-with-react-hooks/
So far you've learned how to read the filesystem of a mobile device using AIR. In this section you take a look at modifying the filesystem objects.

To create files and folders on the mobile device, you need to use a combination of the File, FileStream, and FileMode classes. The FileMode class is found in the flash.filesystem package. When creating ActionScript Mobile projects, you need to import the class through the following statement:

import flash.filesystem.FileMode;

When creating a Flex Mobile project in Flash Builder, you don't need to import the class. The FileMode class provides four static constants. These are flags to define what a FileStream object should do with a File object it receives via the FileStream.open() method. At least one of these properties needs to be supplied as the second parameter in the open() method:

- FileMode.READ: open the file for reading only
- FileMode.WRITE: open the file for writing, replacing any existing contents
- FileMode.APPEND: open the file for writing, adding new data at the end of the file
- FileMode.UPDATE: open the file for both reading and writing
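For comparison only (this is not part of the AIR excerpt), Python's built-in open() uses mode strings that behave much like these FileMode flags: "r" for read, "w" for write-and-truncate, "a" for append, and "r+" for update. A quick sketch using a throwaway temp file:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# "w" is like FileMode.WRITE: creates the file, truncating old contents.
with open(path, "w") as f:
    f.write("first line\n")

# "a" is like FileMode.APPEND: writes always land at the end of the file.
with open(path, "a") as f:
    f.write("second line\n")

# "r" is like FileMode.READ: the file must exist, and is read-only.
with open(path, "r") as f:
    lines = f.read().splitlines()

print(lines)  # ['first line', 'second line']
```

Whatever the platform, the same design question applies: choosing the wrong mode (e.g. WRITE instead of APPEND) silently destroys existing data, so the mode is worth choosing deliberately.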
http://my.safaribooksonline.com/book/-/9781118193341/chapter-7-working-with-the-filesystem/modifying_files_and_filesystem
SabreDAV is open source software that allows you to easily add WebDAV support to a PHP application. It is designed to cover the entire standard, and attempts to allow integration using an easy to understand API.

Here are some key features of SabreDAV:

· Fully WebDAV compliant
· Supports Windows XP, Windows Vista, Mac OS X, DavFSv2, Cadaver, NetDrive, OpenOffice, and probably more
· The zip release ships with sabre/vobject 2.0.5
· The entire codebase has been moved to PHP namespaces
· BC Break: every backend package (CalDAV, CardDAV, Auth, Locks, Principals) now has consistent naming conventions; there's a BackendInterface and an AbstractBackend class
· BC Break: several constructor signatures have been changed in the CalDAV package to reduce dependencies on the ACL package
· BC Break: Sabre_CalDAV_ISharedCalendar now also has a getShares method, so sharees can figure out who is also on a shared calendar
http://linux.softpedia.com/get/Internet/HTTP-WWW-/SabreDAV-52360.shtml
--- On Fri, 11/4/11, Ariel Constenla-Haile wrote:
...
> > /usr/ports/editors/openoffice.org-3-devel/work/ooo/main/unoxml/source/dom/node.cxx:28:
> > /usr/local/include/iconv.h:114: error: 'mbstate_t' does not name a type
> > ...
> > but I already know how to fix those :).
>
> how are you fixing them? Looks like a system header issue,
> not OOo problem...

The problem is only caused when libxml/tree.h is included, so this seems to work:

+#include <cwchar> // for std::mbstate_t
 #include <libxml/tree.h>

What is really weird is that this is specific to OpenOffice; it doesn't happen in other ports, including LibreOffice.

Pedro.

> Regards
> --
> Ariel Constenla-Haile
> La Plata, Argentina
http://mail-archives.apache.org/mod_mbox/incubator-ooo-dev/201111.mbox/%3C1320461813.8817.YahooMailClassic@web113507.mail.gq1.yahoo.com%3E
This codelab guides you through building a simple demo game to introduce ARCore's capabilities so you can use them in your own applications.

Prerequisites

Make sure you have these before starting the codelab:

Unity Game Engine
- Recommended version: Unity 2017.4 LTS or later
- Minimum version: 2017.3.0f2
- JDK 8 (JDK 9 is currently not supported by Unity, use JDK 8 instead)

ARCore SDK for Unity
- Recommended version: v1.5.0 or later (arcore-unity-sdk-v1.5.0.unitypackage)
- Minimum version: v1.0.0

Other items
- Sample assets for the project (arcore-intro.unitypackage, hosted on GitHub)
- ARCore supported device and USB cable

More information about getting started can be found at: developers.google.com/ar/develop/unity/getting-started

Now that you have everything you need, let's start!

Create a new Unity 3D project and change the target platform to Android (under File > Build Settings). Select Android and click Switch Platform. Then click Player Settings... to configure the Android-specific player settings.

Change player settings for ARCore

Add the ARCore SDK

Use Assets > Import package > Custom package to import arcore-unity-sdk-v1.5.0.unitypackage, which you downloaded in the Prerequisites section, into your project.

Add the codelab assets

Import arcore-intro.unitypackage into your project. (If you haven't already done so, check the Overview step for a list of prerequisites you need to download.) This contains prefabs and scripts that will expedite parts of the codelab so you can focus on how to use ARCore.

Add the required scene elements

Now you have a scene set up for using ARCore. Next, let's add some code! The scene controller is used to coordinate between ARCore and Unity. Create an empty game object and change its name to SceneController. Add a C# script component to the object, also named SceneController.

Add ARCore operational checks

The default configuration for ARCore applications is AR Required.
This means that to run this application, the user's device must support ARCore and the ARCore services must be installed. The "ARCore Device" prefab handles the AR Required checks and will start the process of downloading ARCore services automatically. If you want to make an AR Optional application, you can find more information on the Google Developer website.

Open the script. We need to check for a variety of error conditions. These conditions are also checked by the HelloARExample controller sample script in the SDK. First add the using statement to resolve the class names from the ARCore SDK. This will make auto-complete recognize the ARCore classes and methods used.

SceneController.cs

using GoogleARCore;

Create a new method void QuitOnConnectionErrors() {} and add it to the SceneController script. This method checks the state of the ARCore Session to make sure ARCore is working in our app:

- Is the permission to use the camera granted? ARCore uses the camera to sense the real world. The user is prompted to grant this permission the first time the application is run. This check is done by ARCore, so you don't have to write any code to check the permission yourself.
- Can the ARCore library connect to the ARCore Services? ARCore relies on AR Services, which runs on the device in a separate process.

SceneController.cs

void QuitOnConnectionErrors()
{
    if (Session.Status == SessionStatus.ErrorPermissionNotGranted)
    {
        StartCoroutine(CodelabUtils.ToastAndExit(
            "Camera permission is needed to run this application.", 5));
    }
    else if (Session.Status.IsError())
    {
        // This covers a variety of errors. See reference for details.
        StartCoroutine(CodelabUtils.ToastAndExit(
            "ARCore encountered a problem connecting. Please restart the app.", 5));
    }
}

Now add a call to QuitOnConnectionErrors() in the Start() method.
SceneController.cs

void Start()
{
    QuitOnConnectionErrors();
}

Check the ARCore tracking state

ARCore needs to capture and process enough information to start tracking the user's movements in the real world. Once ARCore is tracking, the Frame object is used to interact with ARCore. Add this check to the Update() method. At the same time, adjust the screen timeout so the screen stays on while we are tracking.

SceneController.cs

void Update()
{
    // The session status must be Tracking in order to access the Frame.
    if (Session.Status != SessionStatus.Tracking)
    {
        int lostTrackingSleepTimeout = 15;
        Screen.sleepTimeout = lostTrackingSleepTimeout;
        return;
    }
    Screen.sleepTimeout = SleepTimeout.NeverSleep;
}

Great! Now we have the minimum amount of code to start using ARCore and make sure it works. Next, let's try it out! Save the scene with the name "ARScene" and add it to the list of scenes when building.

Build and run the sample app. If everything is working, you should be prompted for permission to take pictures and record video, after which you'll start seeing a preview of the camera image. Once you see the preview image, you're ready to use ARCore! If there is an error, you'll want to resolve it before continuing with the codelab.

Unity's scaling system is designed so that, when working with the physics engine, 1 unit of distance can be thought of as 1 meter in the real world. ARCore is designed with this assumption in mind. We use this scaling system to scale virtual objects so they look reasonable in the real world. For example, an object placed on a desktop should be small enough to fit on the desktop. A reasonable starting point would be half a foot (15.24 cm), so the scale should be (0.1524, 0.1524, 0.1524). This might not look the best in your application, but it tends to be a good starting point, and you can then fine-tune the scale further for your specific scene.
As a convenience, the prefabs used in this codelab contain a component named GlobalScalable, which supports using stretch and pinch to size the objects. To enable this, the touch input needs to be captured.

Add GlobalScalable support to the Scene Controller

Select the SceneController object in the hierarchy and add the script component GlobalScalable. In the properties for the component, enable "Handle Scale Input" and disable "Adjust Scale." The scale of the scene controller should not be adjusted, since it is the parent of the ARCore detected planes.

Now when running the application, the user can pinch or stretch the objects to fit the scene more appropriately. Next, let's detect and display the planes that are detected by ARCore.

ARCore uses a class named DetectedPlane to represent detected planes. This class is not a game object, so we need a prefab that will render the detected planes. Good news: since ARCore 1.2 there's already such a prefab in the ARCore Unity SDK, Assets/GoogleARCore/Examples/Common/Prefabs/DetectedPlaneVisualizer.

ARCore detects horizontal and vertical planes. We'll use these planes in the game. For each newly detected plane, we'll create a game object that renders the plane using the DetectedPlaneVisualizer prefab. You may have guessed it: since ARCore 1.2 there's a convenient script in the ARCore Unity SDK, Assets/GoogleARCore/Examples/Common/Scripts/DetectedPlaneGenerator.cs, that does just this.

Adding the plane generator and plane visualizer

Let's add the DetectedPlaneGenerator component to the SceneController object. Select the SceneController object, click the Add Component button in the property Inspector, and type in DetectedPlaneGenerator. Then set the value of Detected Plane Prefab to the prefab Assets/GoogleARCore/Examples/Common/Prefabs/DetectedPlaneVisualizer.

Save and Run

Now save the scene and project and run the app. As you look at the room, ARCore will detect planes, and they should appear as differently colored grids.
As ARCore detects more about the scene, the planes will change shape and merge with, or subsume, other planes. Depending on the physical characteristics of the environment, it might take a couple of seconds for the first plane to be detected. Once a plane is detected, it will be rendered using a random color. Plane detection can be improved with good lighting and some sort of pattern or texture (like wood grain, or a rug design).

In the SceneController script, add a member variable for the first person camera. We'll be using the first person camera in this method, so add a public variable and set it to the first person camera.

SceneController.cs

public Camera firstPersonCamera;

Save the script, switch to the scene editor, and set this property to ARCore Device/First Person Camera from the scene hierarchy.

To process the touches, we get a single touch and raycast it using the ARCore session to check whether the user tapped on a plane. If so, we'll use that plane to display the rest of the objects. In the SceneController script, create a new method named ProcessTouches(). This method performs the raycasting hit test and selects the plane that was tapped.

SceneController.cs

void ProcessTouches()
{
    Touch touch;
    if (Input.touchCount != 1 ||
        (touch = Input.GetTouch(0)).phase != TouchPhase.Began)
    {
        return;
    }

    TrackableHit hit;
    TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinBounds |
        TrackableHitFlags.PlaneWithinPolygon;

    if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
    {
        SetSelectedPlane(hit.Trackable as DetectedPlane);
    }
}

Still in the SceneController script, create the new method SetSelectedPlane(). This is used to notify all the other controllers that a new plane has been selected. Right now it just logs that we selected a plane.

SceneController.cs

void SetSelectedPlane(DetectedPlane selectedPlane)
{
    Debug.Log("Selected plane centered at " + selectedPlane.CenterPose.position);
}

The last step is to call ProcessTouches() from Update().
Add this code to the end of the Update() method:

SceneController.cs

// Add to the end of Update()
ProcessTouches();

In ARCore, objects that maintain a constant position as you move around are positioned by using an Anchor. Let's create an Anchor to hold a floating scoreboard.

Create the Game Object

- Add an empty game object named Scoreboard.
- Set the scale of the Scoreboard object to (0.3, 0.3, 0.3). This creates a small scoreboard measuring approximately 1 foot (30 cm).

To the Scoreboard object:

- add the Assets/Codelab/Prefabs/ScoreboardDisplay prefab as a child;
- also add a new script component called ScoreboardController.

Write the ScoreboardController script

In order to position the scoreboard, we need to know where the user is looking, so we'll add a public variable for the first person camera. The scoreboard will also be "anchored" to the AR scene. An anchor is an object that holds its position and rotation as ARCore processes the sensor and camera data to build its model of the world. To keep the anchor consistent with the plane, we'll keep track of the plane and make sure the distance in the Y axis is constant. Also add a member to keep track of the score.

ScoreboardController.cs

public Camera firstPersonCamera;
private Anchor anchor;
private DetectedPlane detectedPlane;
private float yOffset;
private int score;

Just as in the previous step, save the script, switch to the scene editor, and set the "First Person Camera" property to ARCore Device/First Person Camera from the scene hierarchy.

We'll place the scoreboard above the selected plane. This way it will be visible and will indicate which plane we're focused on.

Hide until Anchored

We want the scoreboard hidden until it is anchored in position. We'll do this by disabling all the mesh renderers, then enabling them once anchored. In the ScoreboardController script, in the Start() method, add the code to disable the mesh renderers.
ScoreboardController.cs

void Start()
{
    foreach (Renderer r in GetComponentsInChildren<Renderer>())
    {
        r.enabled = false;
    }
}

Create the function SetSelectedPlane()

This is called from the scene controller when the user taps a plane. When this happens, we'll create the anchor for the scoreboard.

ScoreboardController.cs

// in ScoreboardController.cs
public void SetSelectedPlane(DetectedPlane detectedPlane)
{
    this.detectedPlane = detectedPlane;
    CreateAnchor();
}

Create the function CreateAnchor()

The CreateAnchor method does 5 things:

- Raycast a screen point through the first person camera to find a position to place the scoreboard.
- Create an ARCore Anchor at that position. This anchor will move as ARCore builds a model of the real world in order to keep it in the same location relative to the ARCore device.
- Attach the scoreboard prefab to the anchor as a child object so it is displayed correctly.
- Record the yOffset from the plane. This will be used to keep the scoreboard at the same height relative to the plane as the plane position is refined.
- Enable the renderers so the scoreboard is drawn.

ScoreboardController.cs

void CreateAnchor()
{
    // Create the position of the anchor by raycasting a point towards
    // the top of the screen.
    Vector2 pos = new Vector2(Screen.width * .5f, Screen.height * .90f);
    Ray ray = firstPersonCamera.ScreenPointToRay(pos);
    Vector3 anchorPosition = ray.GetPoint(5f);

    // Create the anchor at that point.
    if (anchor != null)
    {
        DestroyObject(anchor);
    }
    anchor = detectedPlane.CreateAnchor(
        new Pose(anchorPosition, Quaternion.identity));

    // Attach the scoreboard to the anchor.
    transform.position = anchorPosition;
    transform.SetParent(anchor.transform);

    // Record the y offset from the plane.
    yOffset = transform.position.y - detectedPlane.CenterPose.position.y;

    // Finally, enable the renderers.
    foreach (Renderer r in GetComponentsInChildren<Renderer>())
    {
        r.enabled = true;
    }
}

Add code to ScoreboardController.Update()

First check that tracking is active.

ScoreboardController.cs

// The session status must be SessionStatus.Tracking
// in order to access the Frame.
if (Session.Status != SessionStatus.Tracking)
{
    return;
}

Check that there is a selected plane, and update it if it was subsumed by another plane.

ScoreboardController.cs

// If there is no plane, then return.
if (detectedPlane == null)
{
    return;
}

// Check for the plane being subsumed.
// If the plane has been subsumed, switch attachment to the subsuming plane.
while (detectedPlane.SubsumedBy != null)
{
    detectedPlane = detectedPlane.SubsumedBy;
}

The last thing to add is code to rotate the scoreboard towards the user as they move around in the real world and to adjust the offset relative to the plane.

ScoreboardController.cs

// Make the scoreboard face the viewer.
transform.LookAt(firstPersonCamera.transform);

// Move the position to stay consistent with the plane.
transform.position = new Vector3(transform.position.x,
    detectedPlane.CenterPose.position.y + yOffset,
    transform.position.z);

Call SetSelectedPlane() from the scene controller

Switch back to the SceneController script and add a member variable for the ScoreboardController.

SceneController.cs

public ScoreboardController scoreboard;

Save the script, switch back to the scene editor, and set this property to the scoreboard object. Back in SceneController, find the SetSelectedPlane() method we added earlier, and pass the selected plane to the ScoreboardController.

SceneController.cs

// Add to the end of SetSelectedPlane().
scoreboard.SetSelectedPlane(selectedPlane);

Save the scripts and the scene. Build and run the app! Now it should display planes as they are detected, and if you tap one, you'll see the scoreboard!

Now that we have a plane, let's put a snake on it and move it around on the plane.
Add the existing C# script (Assets/Codelab/Scripts/Slithering.cs). This controls the movement of the snake as it grows. In the interest of time we'll just add it, but feel free to review the code later on. Then add a new C# script to the Snake named SnakeController.

In SnakeController.cs we need to track the plane that the snake is traveling on. We'll also add member variables for the head prefab and the head instance:

SnakeController.cs

    private DetectedPlane detectedPlane;
    public GameObject snakeHeadPrefab;
    private GameObject snakeInstance;

Set the prefabs

Back in the editor, set the prefab values. For snakeHeadPrefab, use Assets/Codelab/Prefabs/SnakeHeadPrefab. For snakeBody in the Slithering component, use Assets/Codelab/Prefabs/SnakeBodyPrefab.

Create the SetPlane() method

In the SnakeController script, add a method to set the plane. When the plane is set, spawn a new snake.

SnakeController.cs

    public void SetPlane (DetectedPlane plane) {
        detectedPlane = plane;
        // Spawn a new snake.
        SpawnSnake();
    }

Then spawn the snake.

SnakeController.cs

    void SpawnSnake () {
        if (snakeInstance != null) {
            DestroyImmediate (snakeInstance);
        }
        Vector3 pos = detectedPlane.CenterPose.position;
        // Not anchored: the head is a rigidbody that is moved by the physics engine.
        snakeInstance = Instantiate (snakeHeadPrefab, pos, Quaternion.identity, transform);
        // Pass the head to the Slithering component to make movement work.
        GetComponent<Slithering> ().Head = snakeInstance.transform;
    }

Now add a member variable to SceneController.cs to reference the Snake.

SceneController.cs

    public SnakeController snakeController;

Save the script, switch to the scene editor, and assign the Snake object to snakeController in the scene inspector.
In SceneController.SetSelectedPlane(), pass the selected plane to the snake controller.

SceneController.cs

    // Add to SetSelectedPlane()
    snakeController.SetPlane(selectedPlane);

To move the snake, we'll use where we are looking as the point the snake should move towards. To do this, we'll raycast the center of the screen through the ARCore session to a point on a plane.

First let's add a game object that we'll use to visualize where the user is looking. Edit the SnakeController and add member variables for the pointer and the first person camera. Also add a speed member variable.

SnakeController.cs

    public GameObject pointer;
    public Camera firstPersonCamera;
    // Speed to move.
    public float speed = 20f;

Set the game object properties

Save the script and switch to the scene editor. Add an instance of the Assets/Codelab/Prefabs/gazePointer prefab to the scene. Then select the Snake object and, in the Inspector view, set the pointer property to the instance of the gazePointer, and firstPersonCamera to the ARCore device's first person camera.

Update the pointer state

In SnakeController.Update(), add a check for the snake being active. If it is not, just return; there is nothing to do.

SnakeController.cs

    if (snakeInstance == null || snakeInstance.activeSelf == false) {
        pointer.SetActive(false);
        return;
    } else {
        pointer.SetActive(true);
    }

Raycast the center of the screen

Use the ARCore Session to raycast from the center of the screen. If there is a hit, use that point, but average the plane's y position with the snake head's so the pointer sits between the plane and the head. Add the code below to the end of SnakeController.Update().
SnakeController.cs

    TrackableHit hit;
    TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinBounds;
    if (Frame.Raycast (Screen.width / 2, Screen.height / 2, raycastFilter, out hit)) {
        Vector3 pt = hit.Pose.position;
        // Set the y to the y of the snake instance.
        pt.y = snakeInstance.transform.position.y;

        // Set the y position relative to the plane and attach the pointer to the plane.
        Vector3 pos = pointer.transform.position;
        pos.y = pt.y;
        pointer.transform.position = pos;

        // Now lerp to the position.
        pointer.transform.position = Vector3.Lerp (pointer.transform.position, pt,
            Time.smoothDeltaTime * speed);
    }

Move towards the pointer

Once the snake is heading in the right direction, move it towards the pointer. We want to stop just before reaching the pointer to avoid a strange nose spin. Add the code below to the end of SnakeController.Update().

SnakeController.cs

    // Move towards the pointer; slow down when very close.
    float dist = Vector3.Distance (pointer.transform.position,
        snakeInstance.transform.position) - 0.05f;
    if (dist < 0) {
        dist = 0;
    }
    Rigidbody rb = snakeInstance.GetComponent<Rigidbody> ();
    rb.transform.LookAt (pointer.transform.position);
    rb.velocity = snakeInstance.transform.localScale.x *
        snakeInstance.transform.forward * dist / .01f;

Save the scripts and the scene. Build and run the app! Tap a plane and the snake will appear and follow the pointer around on the plane as you look around. (Remember, you can stretch & pinch to scale the snake if needed.)

Now we want to put a tasty bit of food on the plane. This involves creating an object, placing it on a plane, then removing it after some time. Select the SceneController object, add a new C# script named FoodController, and begin editing the script.

First, add member variables to reference:

- The DetectedPlane instance to place the food on.
- The instance of the food object.
- The age of the object in seconds.
- The max age of the food (all food has an expiration date).
- An array of prefabs to use to create food instances.

FoodController.cs

    private DetectedPlane detectedPlane;
    private GameObject foodInstance;
    private float foodAge;
    private readonly float maxAge = 10f;
    public GameObject[] foodModels;

Initialize the Food Prefabs

First, make sure to initialize the foodModels array in the editor by adding prefabs from Assets/Codelab/Prefabs/Foods. You need to add at least one, but a varied diet is much more enjoyable!

Add the food tag

Add a tag in the editor by dropping down the tag selector in the object inspector and selecting "Add Tag". Add a tag named "food". We'll use this tag to identify food objects during collision detection.

Add the SetSelectedPlane method

In the FoodController script, create a public method named SetSelectedPlane() that will be called by the SceneController when a plane is selected.

FoodController.cs

    public void SetSelectedPlane(DetectedPlane selectedPlane) {
        detectedPlane = selectedPlane;
    }

Manage the plane state

The plane will change state, size, and position as ARCore interprets the input from the sensors and camera. Since we're holding on to the plane, we need to handle these changes. In the FoodController.Update() method:

- Check for a null plane; if we don't have a plane, we can't do anything.
- Check that the detectedPlane is still tracking. Again, if the plane is not tracking, do nothing.

FoodController.cs

    if (detectedPlane == null) {
        return;
    }
    if (detectedPlane.TrackingState != TrackingState.Tracking) {
        return;
    }

- Add the check for the plane being subsumed.

FoodController.cs

    // Check for the plane being subsumed.
    // If the plane has been subsumed, switch attachment to the subsuming plane.
    while (detectedPlane.SubsumedBy != null) {
        detectedPlane = detectedPlane.SubsumedBy;
    }

- Check if there is no active food instance, and spawn a new one if needed.
FoodController.cs

    if (foodInstance == null || foodInstance.activeSelf == false) {
        SpawnFoodInstance();
        return;
    }

- Lastly, increase the age of existing food and destroy it if expired:

FoodController.cs

    foodAge += Time.deltaTime;
    if (foodAge >= maxAge) {
        DestroyObject(foodInstance);
        foodInstance = null;
    }

Implement SpawnFoodInstance()

Spawning a new food item has several steps:

- Selecting a food prefab.
- Calculating a random position to spawn the item.
- Anchoring it to the ARCore frame.
- Attaching a component to make the item rotate.

FoodController.cs

    void SpawnFoodInstance () {
        GameObject foodItem = foodModels [Random.Range (0, foodModels.Length)];

        // Pick a location. This is done by selecting a vertex at random and then
        // a random point between it and the center of the plane.
        List<Vector3> vertices = new List<Vector3> ();
        detectedPlane.GetBoundaryPolygon (vertices);
        Vector3 pt = vertices [Random.Range (0, vertices.Count)];
        float dist = Random.Range (0.05f, 1f);
        Vector3 position = Vector3.Lerp (pt, detectedPlane.CenterPose.position, dist);
        // Move the object above the plane.
        position.y += .05f;

        Anchor anchor = detectedPlane.CreateAnchor (new Pose (position, Quaternion.identity));
        foodInstance = Instantiate (foodItem, position, Quaternion.identity,
            anchor.transform);

        // Set the tag.
        foodInstance.tag = "food";
        foodInstance.transform.localScale = new Vector3 (.025f, .025f, .025f);
        foodInstance.transform.SetParent (anchor.transform);
        foodAge = 0;
        foodInstance.AddComponent<FoodMotion> ();
    }

Set the plane from the SceneController

Add the call to SetSelectedPlane() in SceneController.SetSelectedPlane():

SceneController.cs

    // Add to the bottom of SetSelectedPlane()
    GetComponent<FoodController>().SetSelectedPlane(selectedPlane);

Now that we are moving, add the test for colliding with food: eat it and grow the snake. Create a new C# script named FoodConsumer. To do so, right click on Assets in the Project window, select Create > C# Script, and rename it to FoodConsumer.
We don't want to pollute our awesome snake head prefab with the FoodConsumer, so let's add it to the instance when we spawn. In SnakeController.SpawnSnake(), add the component to the new instance.

SnakeController.cs

    // After instantiating a new snake instance, add the FoodConsumer component.
    snakeInstance.AddComponent<FoodConsumer>();

In FoodConsumer, add the OnCollisionEnter method (you can delete the boilerplate Start() and Update() methods).

FoodConsumer.cs

    void OnCollisionEnter(Collision collision) {
        if (collision.gameObject.tag == "food") {
            collision.gameObject.SetActive(false);
            Slithering s = GetComponentInParent<Slithering>();
            if (s != null) {
                s.AddBodyPart();
            }
        }
    }

Remember that Scoreboard from the beginning of the codelab? Well, now it is time to actually use it! In SceneController.Update(), set the score to the length of the snake:

SceneController.cs

    scoreboard.SetScore(snakeController.GetLength());

Now we need to implement SetScore in ScoreboardController:

ScoreboardController.cs

    public void SetScore(int score) {
        if (this.score != score) {
            GetComponentInChildren<TextMesh>().text = "Score: " + score;
            this.score = score;
        }
    }

Add GetLength() in the SnakeController:

SnakeController.cs

    public int GetLength() {
        return GetComponent<Slithering>().GetLength();
    }

Well done working through this codelab! A quick recap of what was covered:

- Detecting planes and selecting one with a tap.
- Anchoring objects (the scoreboard) so they hold their position in the world.
- Raycasting from the screen to place and steer objects on a plane.
- Spawning anchored food objects and growing the snake on collision.

Other Resources

As you continue your ARCore exploration, check out these other resources:

- AR Concepts:
- Google Developers ARCore:
- Github projects for ARCore:
- AR experiments for inspiration and to see what could be possible:
On 2007-07-03 22:29-0700 Jerry wrote:

> That's fine. Thanks for the information about the necessity of [...]

On Jul 3, 2007, at 7:51 AM, Alan W. Irwin wrote:

> On 2007-07-03 07:11-0700 Alan W. Irwin wrote:
>
>>> Added two new files to the bindings directory, plplot_auxiliary.adb and
>>> plplot_auxiliary.ads. This was necessary in order to make the PLplot_Thin
>>> bindings disappear to the user. These files contain a few basic declarations
>>> used by all of the bindings (which were formerly exported by PLplot_Thin)
>>> and some utility procedures used by several of the examples.
>>
>> Hi Jerry:
>>
>> I committed your Ada patch.
>
> After committing your patch I tried one further local change: removing all
> references to PLplot_Auxiliary from the examples, because both the thick
> and traditional examples refer to Ada interfaces which in turn refer
> to PLplot_Auxiliary.
>
> However, that led to the following type of error for both thick and
> traditional examples for both example 1 and example 12 (example 10 is
> so simplistic it doesn't run into the problem):
>
> xthick01a.dir/xthick01a.o
> xthick01a.adb:39:14: "Real_Vector" is not visible (more references
> xthick01a.adb:39:14: non-visible declaration at plplot_auxiliary.ads:26
> xthick01a.adb:39:26: incorrect constraint for this kind of type
> xthick01a.adb:43:11: "String_80" is not visible
> xthick01a.adb:43:11: non-visible declaration at plplot_auxiliary.ads:68
>
> I presume that with the appropriate namespace commands in
> plplot_traditional.ad[bs] and plplot.ad[bs], all the PLplot_Auxiliary
> definitions (e.g., String_80, etc.) could be made visible to the examples
> without having to explicitly mention PLplot_Auxiliary in the examples.
>
> In sum, there should be no fundamental need to mention PLplot_Auxiliary in
> the examples, but this appears to be currently required because of namespace
> issues. Getting such namespace concerns properly dealt with is always a bit
> of a struggle in any given language, but I hope you are able to figure this
> out for Ada with help from your Ada contacts/references.
>
> Alan

I don't know how to overcome the specific problem that you mention, and I tried to do it. The utility functions can be put into the bindings, but that would be sort of polluting the bindings, since I included them only for convenience in writing the examples. The C and Fortran examples, I notice, include these functions in the example files themselves--I was just trying to save effort. I would do so, but that still leaves the 2-4 declarations that are used by all of the bindings. These were previously part of the thin binding and with-ed by the thick and traditional bindings, and that with-ing went away with the decision to make the thin binding more or less invisible. They can't be put directly into the thick and traditional bindings because the thin binding needs them also. (I don't think Ada lets two packages both "with" each other, but that might have changed with Ada 2005. However, I don't want to require Ada 2005 to run these bindings, as that would cut the user base to approximately zero. The actual standard was just settled in 2007 and so it should be called Ada 2007 (and sometimes is).)

I should note that in Ada, a type declaration in one package and an identical declaration in another package are actually different types. The main thing that all three bindings need is Real_Vector and Real_Matrix, which are supplied by Ada 2005 but not otherwise. That is why I've put in some comments indicating that certain lines (1 or 2 in maybe 5 files) can be commented out or in depending on whether the compiler supplies these types. What bothers me the most about this is that the user may well have a suitable type declaration in his calling program and would appreciate the types declared in PLplot_Auxiliary getting out of the way.

I'll scratch my head on this a bit more and maybe ask the gurus at comp.lang.ada.

Jerry
Open Source proxy

- Open-source HTTP proxies: A HTTP proxy is a piece... browser) and HTTP server software. The proxy receives all requests from the browser... of some form of proxy. The difficulty sometimes arises that it's not easy to find
- Free Open Source Proxy Servers Developed in Java: We have moved to another page.
- Overcome proxy in Java URL connection: ...but i'm not able to overcome the proxy setting. Please send me an example of how..
- New to struts2: Please let me know the link where to start for struts 2 beginners.
- how to open a web page using proxy in java and handling button event in that web page..?: Sir, what i need is a code which uses a proxy address... Note: a vote for a game can be done only once per day.. after one vote
- help me: please send me the java code to count the number of similar words in a given string and replace that word with a new one
- connection proxy error in hibernate: while we get a connection through hibernate we get a connection proxy error while we are using multiple connections
- help me: ...the order, Fourth button is Find length of text. and there are panels. Please help me...
- help me: Several lines of customer should be displayed. you can add a new customer...
- help me: New to programming... Are you new to the Java Programming Language?
- Very new to Java: hi I am pretty new to java and am wanting to create a program for the rhyme 10 green bottles. 10 green bottles standing... if anyone could actually help me with this that would be great
- Latitude and Longitude of New Zealand: Hi, What is the Latitude and Longitude of New Zealand? Also give me an insight into this beautiful country. Thanks. / Hi, the Latitude of New Zealand is -41 and the Longitude of New...
- Open Source Metaverses: OpenSource Metaverse Project. The OpenSource Metaverse Project was created because a strong demand exists, and large... individual proprietary worlds. The key deliverables of the OpenSource Metaverse...
- new: hi i am jane pls explain the difference between string greeting = "good morning" and string greeting = new String("good morning")
- new: hi, the difference between string sname = "good morning" and string sname = new("good morning")
- new: hi i am jane pls explain the difference between heap memory and stack memory
- servers - Java Magazine: what is the difference between proxy and Gateway? ... to the other network system. Proxy Server: a proxy server is a kind... of Internet users in a special database, called "cache". Therefore, proxy servers...
- Enable Proxy Setting: In this section, you will learn to enable the proxy setting for Maven using the setting.xml file
- Open Source E-mail Server: for businesses. MailWasher Server differs from other open-source server anti-spam... to work hand-in-hand with POP3 e-mail accounts; however, with new features being... and directories. In June, Isode will launch a new version of its standards-based...
- proxy server and client using java - Java Beginners: how to write a program in java for a proxy server and client
- New Delhi to Agra by Train - Timetable: Hi, can anyone please provide me the timetable of Delhi to Agra trains..? Thanks
- What's new in Hibernate 4.3.0 Final?: I just found that there is a new version of Hibernate... Share me a good resource. Thanks
- plz tell me: CountButtonClicks extends JFrame... button1.addActionListener(new ActionListener...
- Reply Me - Struts: Hi Friends, I am new to struts, please help me. How to arrange the folders using the oracle 10g database: where to store the web.xml file, jsp file, connection file....etc. please let me know, its very urgent
- Please help me: Hi Sir, please send me the code for the following... a = new HashSet(); a.add(10); ... Set s = new HashSet(a); s.addAll(b); ArrayList...
- please tell me: BufferedReader br = new BufferedReader(new InputStreamReader(System.in)); System.out.println("Enter a char"); ...
- please tell me: ...but it compiled and ran successfully, please tell me the reason
- help me - Java Beginners: i want to create a login form in java with a jdbc connection... one button for login and another for new user. if anybody clicks login it will open a welcome form; if anybody clicks new user it will open another
- please tell me: Producer pr = new Producer(); Consumer con = new Consumer(pr); ...
- help me - Java Beginners: helo guys can you share me a code about Currency Conversion. Money will be converted according to the type of currency. would you help me please... into 2 decimal places, sample 1.25) Mexican_________ guys help me.. thanks
- provide me the program for that ques.: write a program in java: there is a class readchar. the functionality of this class is to read a string from...
- Please help me??: Question_1: Using a one-dimension array of primitive... elements. Instructions: 1. Create a new file ArrayDemo.java 2. Declare... elements in array2 at index 0,2,4 and 6 to have new values such as 0,2,4 and 6
- help me 2: write a java program to enter five numbers and determine the location of a number. sample output: ENTER 5 NUMBERS: 1 2 3 4...
- please tell me: '}' expected ^ 2 errors, i got this error....... please tell me the resolution...
- please tell me: cannot find symbol: constructor Sample(double,double); Sample s = new Sample(10.6,20.0); ^ 1 error, please tell me the resolution for this...
- please tell me: Reserve re = new Reserve(1); Thread t = new Thread(re); ... t.setName("first person"); ... please tell me the resolution
- Reply Me - Java Beginners: Hi, I am sending some code, please check it and solve... table and reply me fast... I got two errors: 1 is catch without try, 2 is try...
- Plz Help Me: JButton b1, b2, b3; Signal green = new Signal(Color.green); ... b1 = new JButton("Red"); ...
- please help me: interface Test1 { String toString(); } public class Test { public static void main(String[] args) { System.out.println(new Test1() { public String toString() { return "test...
- Can someone help me with this?: I have this project and i dont know how to do it. Can someone help me? please? Write a java class named "PAMA... (Multiply, Divide) Help me please! Thanks in advance!
- Reply me - Java Beginners: in the database... if u understood my question then send me the code, otherwise tell me, its urgent
- Reply Me - Java Beginners: the form has two buttons, new and refresh. when the user clicks the new button a popup opens.... JSP, Servlets, Struts 1, Struts 2 or JSF? Please tell me
- please tell me: class Hash { public static void main(String ar[]) { Person raju = new Person(); ... raju.talk...
- please tell me: Connection con = DriverManager.getConnection("jdbc:odbc:student"); File imgfile = new File("C:/rose.jpg"); ...
- answer me this program: Java Code to Maintain a student record: Name... average...
- pls help me!!!!!!!: how to implement the push operation to get a new screen when developing an iphone application. / In iPhone applications, "PUSH NOTIFICATION" notifies users when a specific update occurs
- Please find me a solution that: Write a java program that calculates the area of a rectangle, square, triangle, and circle.
- please tell me: GetXmlHttpObject() { var xmlHttp = null; try { xmlHttp = new XMLHttpRequest(); } catch (e) { ... new ActiveXObject("Microsoft.XMLHTTP"); ...
- Hi .Again me.. - Java Beginners: Hi Friend...... can u pls send me some code...... JFrame frame = new JFrame("Button Group program...
- program to create an http proxy server that runs on top of netty
- please explain me the flow of this program: class Bticket extends... Object tr = new Object(); Object compartment = new Object(); Bticket bt = new Bticket(tr, compartment...
- Proxy Setting in Windows 8 for IE: in connecting to the internet, all you need is to do the proxy settings using your browser. Here we will learn to do the proxy settings in Internet Explorer... check the 'Use a proxy server for your LAN' box below.
- Could someone Possibly help me with this: think of a number... int randomNumber = new... Scanner input = new Scanner(System.in); System.out.print("Enter number to guess...
- SQLException: Column not found? (help me:(: import javax.swing.... c.setLayout(new FlowLayout()); lblCode = new JLabel("Stock Code:"); ...
- New Problem in Form Resizing - Swing AWT: Hi, I am using an MDI application which has a menu item New which adds My New JInternalFrame form to the desktopPane... jScrollPane1 = new javax.swing.JScrollPane(); EditorPane = new javax.swing.JTextPane...
- Can you help me with applet?: I'm very new to applets, and i'm weak... can you help me with this: public class fortoys { ...
- hep me - Java Interview Questions: MergeLinkedLists: LinkedList list1 = new LinkedList(); list1.add(1); ... LinkedList link = new LinkedList...
- its urgent for me - Development process: Hi All, this is kalavathi, i am new... what is the problem? can u tell me; i will send my total code and the error exception also. RetryingTransactionCallback... public Object execute() throws...
- help me plz befor 27 februry: what is wrong with this?? System.out.println("Hello, what is your first name?"); Scanner keyboard = new Scanner(System.in); ... last name?...
- please give me a java program for the following question: Write... JFrame f = new JFrame(""); f.setLayout(new FlowLayout()); JButton button1 = new JButton("Fruit...
- help me fix this code - MobileApplications: Please help me in this area of code... //welcome = new TextBox("Welcome (Set your trip... mList = new List("All Trip Details...
- Please help me about maven2 plugin - Maven: First, thank you for all your... for new features to install. And then i click the new remote site, typing Maven2Plugin... an error. I send you the image i captured of the error
- please explain me the flow of execution of this program: sb = new StringBuffer(); ... Producer pr = new Producer(); Consumer con = new Consumer(pr...
- verify the code and give me the code without errors: import...; JTextField l = new JTextField(); JLabel l1 = new JLabel("REG.NO"); ...
- java, plz help me in doing this - Java Beginners: # Write a small record management... f = new JFrame("Form"); p = new JPanel(new GridLayout(2,1)); ...
- please help me to give code - Java Beginners: Write a program that prints an n-level staircase made of text. The user should choose the text character... BufferedReader buff = new BufferedReader(new...
- Remember me on this computer (via cookies) in jsp servlet: Hi, Here... userIdendificationKey="dxkdyneimd@5786*54904"; Date now = new Date(); String timestamp...
- Send me Binary Search - Java Beginners: how to use binary search in java, give me... Integer[] a = new Integer[SIZE...
- please help me to give code - Java Beginners: Write a program that reads a file named... System.out.println("Enter txt file name...
- please help me for this program - Java Beginners: Write a program that asks the user for a starting value and an ending value and then writes all the integers... System.out.println("Enter Start:"); ...
- help me plz - Java Interview Questions: 1) write a java program that prompts the user to input... System.out.println("Enter Decimal Number: "); Scanner input = new...
- (help me) use wait() and notify() method in Thread.: we have... implements Runnable { Multithreading th; ... new Thread(this, "Sintok ").start(); ...
http://www.roseindia.net/tutorialhelp/comment/20663
WiFi listen mode

I have an iOS app and I want to use it in the native code. Is there any way to put the Particle Photon into listening mode, without pressing the listening-mode button on the device, from the app?

There are at least two other ways to force listening mode (blinking blue), which enables the SoftAP. If you never configure Wi-Fi credentials on the device, it will boot into listening mode. You can clear them by holding down SETUP until the status LED blinks blue, then continue to hold down SETUP until it blinks blue rapidly (about 10 seconds longer). If you call WiFi.listen() from your code, you will go into listening mode as well. You can't just do it from a mobile app unless you have some other channel to invoke it; for example, if both the device and phone are online you could trigger listening mode with a Particle.function.

I am connected to the Photon and I have all the details of the device. Now I want to make the device go into listening mode. I tried the Particle SDK but I wasn't able to find the listening function. How do I call the listen function? Note: I am connected to the device. I just want to make it go into listen mode so that I can change its Wi-Fi.

How about this? This is how it runs in iOS Objective-C. I cannot directly use WiFi.listen(); that does not work, it gives me a 404 error. Now how do I use this in iOS? There is no library where a WiFi function is declared.

You need to expose a Particle.function() which in turn calls WiFi.listen(), or rather sets a flag which is checked in loop() to engage Listening Mode. This is what @rickkas7 meant when he said:

There is no Particle object, hence I cannot call Particle.function(). This is the iOS SDK.

There are 2 parts to what you're doing… (1) the Photon firmware and (2) the iOS app. Particle.function() exists in the device firmware (NOT the iOS SDK). You call Particle.function() in the firmware to expose a function.
In Objective-C, you use this example to call the function that you created in the firmware. Here is a link to Particle.function() in the firmware reference: Particle.function()

What do I need to import in order to call WiFi.listen()? It is giving me the error no matching function for call to 'CloudClass::_function(void (&)())' and I have already imported

#include "Particle.h"
#include "softap_http.h"

You need to show the code line that causes the error. This error message does not indicate that the function wasn't found (imported) but that you have been using it wrong. By looking at the error message alone I'd deduce that you wrote something like this:

Particle.function(WiFi.listen());

Right? Consequently (and by the over-all history of this thread) I guess you haven't had a close look at the docs @ninjatill linked above with regard to how to use Particle.function(), or how to develop firmware for Particle devices in general.

#include "Particle.h"
#include "softap_http.h"

SYSTEM_MODE(MANUAL);
SYSTEM_THREAD(ENABLED);

void setup() {
    WiFi.on();
    WiFi.clearCredentials();
    Particle.function("asdad", tryingToConnectToWifi);
}

void tryingToConnectToWifi(String FCommands) {
    WiFi.listen();
}

void loop() {
}

How about this (from the docs sample):

bool success = Particle.function("funcKey", funcName);

// Cloud functions must return int and take one String
int funcName(String extra) {
    return 0;
}

Your function is a void fn(String) but Particle.function() expects an int fn(String), and this function has to feature a return someInt; line. That's what the error message complains about: no matching function for call to 'CloudClass::_function(void (&)())'.

However, after a WiFi.clearCredentials() and without Particle.connect() in SYSTEM_MODE(MANUAL), your device will never connect to the cloud to actually make it available for a remote caller, so the over-all logic of this project is somewhat obscure.
This is also what @rickkas7 meant when he said:

Also, the two options (clearing credentials or Particle.function()) were meant as alternative ways to achieve the same thing (depending on need), not both unconditionally.

To cut things short, I'd rather go with something like this:

#include "Particle.h"
#include "softap_http.h"

SYSTEM_MODE(SEMI_AUTOMATIC);
SYSTEM_THREAD(ENABLED);

bool enterLM = false;

void setup() {
    Particle.function("startListen", enterListeningMode);
    Particle.connect();
}

void loop() {
    if (enterLM) {
        enterLM = false;
        WiFi.listen();
    }
}

int enterListeningMode(String dummy) {
    enterLM = true;
    return 1;
}

Then you can engage LM with this:

NSURLSessionDataTask *task = [myPhoton callFunction:@"startListen"
                                      withArguments:@[@"", @1]
                                         completion:^(NSNumber *resultCode, NSError *error) {
    if (!error) {
        NSLog(@"device should be in Listening Mode now");
    }
}];

or via console.particle.io/devices

Thank you guys a lot. @ninjatill, @rickkas7, @ScruffR, all you guys, thank you for being so supportive and patient. It worked.
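For completeness: once the firmware exposes startListen, it can also be invoked from any HTTP client through the Particle Cloud REST API, not only the iOS SDK. The device ID and access token below are placeholders, and the argument is omitted because enterListeningMode() ignores it:

```shell
# Call the cloud function "startListen" on the device; the JSON response
# carries the function's int return value (1 here means the flag was set).
curl https://api.particle.io/v1/devices/0123456789abcdef01234567/startListen \
     -d access_token=YOUR_ACCESS_TOKEN
```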
https://community.particle.io/t/wifi-listen-mode/41899/10
This article gives a quick view into the problems faced with value type objects and the solution to those problems. This beautiful facility is introduced by Microsoft as a concept called "Nullable Types" in C# 2.0.

Summary: Value type objects are faster to access and operate on than their reference counterparts. This makes them a preferred choice for data storage in a program. However, there are certain limitations when using value type objects in an application. This article gives a quick view into the problems faced with value type objects and the solution to those problems, introduced by Microsoft as a concept called "Nullable Types" in C# 2.0.

In this article I will focus on answering two questions:

a. Why do we need Nullable types?
b. What are Nullable types?

1. What will be the default value for a value type object?

Assigning a default value to a value type object is really a brainstorming process, since it involves evaluating the possible values that the object can be assigned during its whole life span. Apart from finding the values an object can have during its life span, we also need to lay down a standard designating some arbitrary value outside that range as the default. This forces a rule onto the project standards to treat the designated value as the default, and then forces the people working on the project to follow that rule. Apart from this forced-standard limitation, the default value occupies a slot in the value range as an unusable value, limiting the future values for the object.

Let's look at an example. I have a structure holding student information; I can write the structure as:

public struct student
{
    private int m_RollNumber;  // Can range from 0 -- 100
    private string m_FullName; // the non-blank string name.
    ...

    public student(int RollNumber, string FullName, .....)
    {
        // Initialize the members of the new object
        this.m_RollNumber = RollNumber;
        this.m_FullName = FullName;
    }
}

Let's say I decide on the value 999 as my default for the roll number. I then need to enforce a rule to check for the value 999 before anybody uses m_RollNumber in the code, and also the convention that 999 denotes an un-initialized m_RollNumber. Thinking of the future impact: when the roll numbers reach 1000, the value 999 will be unusable even though it is by then a valid roll number.

2. Functions returning a value type object cannot denote the success of the execution:

Consider a collection of students in which I search for a particular roll number:

public class Students
{
    // The internal array list maintaining the student list
    private ArrayList m_Students = new ArrayList(100);

    public Student GetStudentInfo(int RollNumber)
    {
        // Iterate through the list to find the student with roll number RollNumber
        foreach (Student stTemp in m_Students)
        {
            // Check if stTemp is the required student.
            if (stTemp.m_RollNumber == RollNumber)
            {
                // If yes, return stTemp.
                return stTemp;
            }
        }
        // If no match is found, return an empty object.
        return new Student();
    }
}

To work around this problem we need to go for 'out' parameters. The modified version of the function is as follows:

public bool GetStudentInfo(int RollNumber, out Student outStudent)
{
    // Initialize the outStudent object to a blank object.
    outStudent = new Student();
    // Iterate through the list to find the student with roll number RollNumber
    foreach (Student stTemp in m_Students)
    {
        if (stTemp.m_RollNumber == RollNumber)
        {
            // If the required object is found, save it to the out parameter and
            outStudent = stTemp;
            // return true to indicate success.
            return true;
        }
    }
    // If the object is not found, simply return false to denote failure.
    return false;
}

To overcome the above-mentioned problems with the default value and the return value from a function, we need some tactful way around them. That's the reason Microsoft introduced Nullable Types in C# 2.0.

b. What are Nullable types?

Nullable types are the same generic data types with a wrapper structure written around them. This wrapper structure allows assigning the value null even to value type objects. If an object's declaration has a '?' attached to the data type, it denotes that the object is a Nullable object of that data type. For example, I can declare the roll number as:

int? RollNumber;

The RollNumber declared above is a Nullable roll number; assigning the value null to RollNumber is valid and completely acceptable. This removes the burden of making some value the default, and since a function can use a Nullable type as its return type, we have a way to return a null value when the function fails.

An instance of a Nullable type has two public properties. A 'true' value for the HasValue property guarantees the validity of the value contained in the object, which can be accessed through the Value property. A 'false' value for HasValue indicates that the object contains an invalid value, suggesting you should not use the Value property. An instance for which HasValue is false is said to be null. A null instance has an undefined value. Attempting to read the Value of a null instance causes a System.InvalidOperationException to be thrown.

Talking about the default values of Nullable types: any instance of a Nullable type has a default value in which HasValue is 'false' and the Value property is undefined. This is known as the null value of the Nullable type. Implicitly converting the value 'null' to any Nullable type results in the null value of that Nullable type.

The following example shows the use of a Nullable type:

class Program
{
    // the Nullable roll number.
    public int? RollNumber = null;

    static void Main(string[] args)
    {
        Program pTemp = new Program();
        // use the public member.
        pTemp.RollNumber = 100;
        // Write to console to check the object values.
        Console.WriteLine("Value in RollNumber is: {0}, [HasValue] {1}",
            pTemp.RollNumber.Value, pTemp.RollNumber.HasValue);
        // test using null.
        pTemp.RollNumber = null;
        Console.WriteLine("Value in RollNumber is: {0}, [HasValue] {1}",
            pTemp.RollNumber.Value, pTemp.RollNumber.HasValue);
        // This line throws an InvalidOperationException.
    }
}
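To connect this back to the earlier GetStudentInfo example: with a nullable return type, neither the magic default object nor the out parameter is needed. The following is a sketch of the idea rather than code from the article (it also glosses over the fact that m_RollNumber and m_FullName are private in the struct above):

```csharp
// Returns null when no student matches, instead of a made-up "empty" Student.
public Student? GetStudentInfo(int RollNumber)
{
    foreach (Student stTemp in m_Students)
    {
        if (stTemp.m_RollNumber == RollNumber)
            return stTemp;   // HasValue will be true for the caller
    }
    return null;             // HasValue will be false: no match
}

// Caller:
Student? result = students.GetStudentInfo(42);
if (result.HasValue)
    Console.WriteLine(result.Value.m_FullName);
else
    Console.WriteLine("No such student");
```

Because Student is a struct (a value type), Student? is shorthand for Nullable&lt;Student&gt;, so the HasValue/Value pair described above works exactly the same for it as for int?.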
https://www.c-sharpcorner.com/article/nullable-types-in-C-Sharp-basic-introduction/
It’s a gas, gas, gas… OK, so the title’s rubbish. I’m open to suggestions for a better one which includes the word ‘gas’ or ‘meter’. Except the Beatles song ‘From me to you’, of course, because that would be a dreadful pun. And so to the point, which is that this blog post talks about using Gadgeteer kit to periodically read a gas meter, and send the data to a server for further use. The gas bills for my house are, frankly, enormous. Not so unusual for a twenty-five bedroom mansion, of course, but I don’t live in one of those. And I’m not downsizing for anybody. I’d like to know when the gas is being used, so that I can optimise the times that the central heating is on, but that means reading the meter regularly. I could simply go to the cupboard under the stairs and look, but where’s the fun in that? It’s clearly a much better option to build some electronic gadget which will do it for me, and in doing so raise my electricity usage. It’s important to be clear about one thing. Gas (and electricity) suppliers do not like you attaching devices to their meters. For one thing, they always suspect foul play and that you are trying to slow down or reverse the meter. For another, it’s dangerous. Gas is explosive, and you really don’t want even the slightest possibility of an electrical spark in its vicinity. Some gas meters come equipped with built in contacts which pulse a switch after every unit of gas is used, but wiring up to them is not a good idea (and may well be illegal, for all I know) unless the equipment you are connecting is specially designed. So I don’t do that. My approach to reading the meter is completely non-contact. Using the Gadgeteer camera, a photo is taken at regular intervals, and sent to a server for processing. Yes, it’s ridiculous technological overkill, but it is safe and legal. There are some technical hurdles to overcome, of course. 
Firstly, my gas meter is in a distinctly insalubrious cupboard under the stairs (at least that whiny Potter boy has moved out now), and it’s dark. This means that lighting is required as well as the camera. Secondly, the image has to leave the Gadgeteer system and get to a server, but oddly enough I don’t have a network point in the under-stair cupboard, so wireless data transmission is required. WiFi would be the obvious way to do this, but I don’t have a Gadgeteer WiFi board, so I’m using XBee instead. The hardware For this project, I’m using five Gadgeteer elements: the Sytech Nano main board, the Sytech Serial Camera, a GHI dual power supply board, the GHI 7 LED board and a Sytech XBee module. In addition, a couple of white LEDs and a couple of 180 Ohm resistors are needed. Oh, and a 5V power supply. In the absence of a 3D printer to print housings for the stuff, I’m also using some cheap plywood, glue and cable ties to hold everything together. If you have been following my previous blog posts (and why wouldn’t you?), the only unfamiliar component will be the XBee board. The board itself is little more than a simple carrier for the XBee module, which is a self-contained low power IEEE802.15.4 wireless networking node. The poorly-mounted capacitor you see sticking out of the side of the board is my addition, about which I will comment later. XBee modules can operate in a variety of modes, including mesh networking, which is undoubtedly cool but not supported by the older nodes I possess, and broadcast or point-to-point communication. For my purposes, the node at the meter reader end is simply going to broadcast its data to anything that will listen. I’ve then got a second XBee node on a USB carrier plugged in to a proper computer to receive the data. In the Gadgeteer world, the XBee device appears as a serial port – you send data to it, and it gets broadcast. Incoming data can be read as you would any other serial stream. 
You can effectively forget that it's a radio transceiver, and just use it as a serial cable. So, to the Visual Studio project. The usual setup applies; here is a picture of the design surface once the components are added:

You'll notice that the XBee component doesn't look too good. That's because the manufacturer hasn't fully implemented the component. The driver works, though, and that's the main thing. The code is a simple timer-based loop. It initialises the camera (and waits for it to be ready) then starts a timer. The timer ticks every so often, and keeps track of when the last picture was taken. When enough time has passed, it takes a new picture, gets the data as a byte array and sends it out over the XBee serial port, adding a small header so that the receiver knows it's a new picture, and a byte count so that the receiver knows what to expect. A byte count is a good idea, because radio transmission is liable to interference, and data can go missing or get corrupted. Some form of reliable data transfer, with checksums and acknowledgements, would be better, but that's too much like hard work to implement. When I get hold of a WiFi Gadgeteer board, I'll use TCP and not worry about it. While the system is waiting to take a picture, it periodically flashes the red LED on the LED7R board, and while it is getting the data from the camera, it animates the other 6 green LEDs to show progress. There is no other display or means of showing debugging info on this project, so it's useful to have some reassurance that it is still running.
The code is below:

using System;
using System.Collections;
using System.Threading;
using Microsoft.SPOT;
using Microsoft.SPOT.Presentation;
using Microsoft.SPOT.Presentation.Controls;
using Microsoft.SPOT.Presentation.Media;
using Microsoft.SPOT.Touch;
using Microsoft.SPOT.Hardware;
using Gadgeteer.Networking;
using GT = Gadgeteer;
using GTM = Gadgeteer.Modules;
using Gadgeteer.Modules.Sytech;
using Gadgeteer.Interfaces;
using GHIElectronics.NETMF.IO;
using GHIElectronics.NETMF.System;
using Gadgeteer.Modules.GHIElectronics;

namespace GasMeterReader
{
    public partial class Program
    {
        Gadgeteer.Modules.Sytech.XBee xbee;
        GT.Timer pictureTimer = new GT.Timer(1000);
        bool isProcessing = true;
        Serial xbeePort;
        int timeBetweenPictures = 20; // seconds
        int timeTillNextPicture = 10;

        void ProgramStarted()
        {
            xbee = new XBee(5);
            xbee.Configure(115200, GT.Interfaces.Serial.SerialParity.None,
                GT.Interfaces.Serial.SerialStopBits.One, 8, false);
            camera.CameraReady += new SerialCamera.CameraEnabledEventHandler(camera_CameraReady);
            camera.CameraPictureReady += new SerialCamera.CameraEventHandler(camera_CameraPictureReady);
            camera.OnPictureProgess += new GTM.Sytech.Camera.PictureProgressDel(camera_OnPictureProgess);
            camera.EnableCamera();
            xbeePort = xbee.GetPort;
            if (!xbeePort.IsOpen)
                xbeePort.Open();
            xbeePort.WriteLine("Program starting");
            pictureTimer.Tick += new GT.Timer.TickEventHandler(pictureTimer_Tick);
            isProcessing = true;
            pictureTimer.Start();
        }

        void pictureTimer_Tick(GT.Timer timer)
        {
            if (isProcessing)
                return;
            timeTillNextPicture = timeTillNextPicture - 1;
            if (timeTillNextPicture == 0)
            {
                timeTillNextPicture = timeBetweenPictures;
                isProcessing = true;
                SetLedPercent(100);
                pictureTimer.Stop();
                camera.TakePicture();
            }
            else
                PulseRedLed();
        }

        void PulseRedLed()
        {
            lED7R.TurnLightOn(7);
            Thread.Sleep(200);
            lED7R.TurnLightOff(7);
        }

        void SetLedPercent(int percent)
        {
            for (int led = 1; led < 7; led++)
            {
                if (percent < (led * 16))
                    lED7R.TurnLightOff(led);
                else
                    lED7R.TurnLightOn(led);
            }
        }

        void camera_OnPictureProgess(object sender, GTM.Sytech.Camera.ProgressEventArg arg)
        {
            int percent = (arg.blockReceived * 100) / arg.Blocks;
            SetLedPercent(percent);
        }

        void camera_CameraReady(SerialCamera sender, GTM.Sytech.CameraProtocol.ImageSize resolution)
        {
            camera.Resolution = GTM.Sytech.CameraProtocol.ImageSize.svga;
            isProcessing = false;
        }

        void camera_CameraPictureReady(SerialCamera sender, GTM.Sytech.Camera.ImageEventArg CameraImage)
        {
            byte[] outputBuffer = CameraImage.GetImageDataBuffer();
            Debug.Print("CameraImage.GetImageDataBuffer()");
            xbeePort.WriteLine("BITMAP");
            UInt32 crcValue = Utility.ComputeCRC(outputBuffer, 0, -1, 0);
            byte[] header = new byte[4];
            header[0] = (byte)(outputBuffer.Length & 0xFF);
            header[1] = (byte)((outputBuffer.Length >> 8) & 0xFF);
            header[2] = (byte)((outputBuffer.Length >> 16) & 0xFF);
            header[3] = (byte)((outputBuffer.Length >> 24) & 0xFF);
            xbeePort.Write(header);
            xbeePort.Write(outputBuffer, 0, outputBuffer.Length);
            byte[] crc = new byte[4];
            crc[0] = (byte)(crcValue & 0xFF);
            crc[1] = (byte)((crcValue >> 8) & 0xFF);
            crc[2] = (byte)((crcValue >> 16) & 0xFF);
            crc[3] = (byte)((crcValue >> 24) & 0xFF);
            xbeePort.Write(crc);
            xbeePort.WriteLine("END");
            xbeePort.Flush();
            pictureTimer.Start();
            isProcessing = false;
        }
    }
}

And that's it. The receiver is a WPF application, and is not very complex: it listens for the start of the bitmap message, reads the byte stream into an array and then creates an image from it (the data is already in jpg format). It's then easy to display that in an Image control. My next task is to do some processing on the image to isolate the numbers, and store the actual meter reading in a database. I haven't done that yet. I'd like to do it in the Gadgeteer device itself, but I'm not convinced that it's practical. At least the speed of processing is not an issue, because the meter does not need to be monitored every second! Here's what the finished article looks like.
Note the craftsmanship on the woodwork and the elegant use of hot-melt glue. I'm particularly proud of the bright orange cable ties used to hold everything together, because I'm too cheap to buy a bag of 3mm screws and nuts. Note that the 7-LED rosette was not mounted when I took these photos. Front view (as seen by gas meter) Back view (note external power supply cable and wiring going to illumination LEDs) And here it is in place, all alone in the dark: As soon as I get the recognition part of the receiver software working properly, I'll post an update – and then you'll be able to follow a live feed of my gas consumption. I bet you can't wait.

Hello, Nice, I like the project. I have been tempted to do the same for my electricity! Have you considered using Project Hawaii for the OCR bit? Steven (.NET Gadgeteer team)

I wasn't aware of Project Hawaii. It looks interesting, though. My current plan is to do the OCR on the Nano board itself, because the image is highly constrained: there is a nice black rectangle containing four white characters (and two smaller ones containing red numbers), each of which can only be one of ten numbers. This simplifies the issue enormously. The image processing will still be very slow, but that hardly matters because the meter doesn't change that quickly (though quicker than I would like, otherwise I wouldn't be doing the project). I've not quite finished the OCR yet, but I'll post it when I do.

Hi Peter, Cool work. How many frames per second do you think you can transmit with it? Thanks, Sean

It's not really in the 'frames per second' league. I'm sending one frame every thirty seconds at the moment (though the actual data transfer takes only a few seconds). When I've finished doing the OCR inside the device, it will probably be nearer one frame a minute. That's good enough for my purposes, but you wouldn't want to try and film a hummingbird with it.
http://robinsonia.com/wp/?p=124
Thanks to all. The problem was the static hashmap. I removed the static keyword from the variable and problem solved.

----- Original Message -----
From: "Christopher Schultz" <chris@christopherschultz.net>
To: "Tomcat Users List" <users@tomcat.apache.org>
Sent: Monday, 1 August 2011 20:11:27
Subject: Re: Problem with threads in stage Service (Tomcat 7.0.14)

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Alejandro,

On 7/29/2011 11:55 AM, Alejandro Henao González wrote:
> public class HTMLEncoder {
>     private static Map mapChar2HTMLEntity;
>
>     private final static char[] characters = {
>         'á','ú','ó','é','í','ñ','Á','Ú','Ó','É','Í','°','ü' };
>     private final static String[] entities = {
>         "&aacute;","&uacute;","&oacute;","&eacute;","&iacute;","&ntilde;",
>         "&Aacute;","&Uacute;","&Oacute;","&Eacute;","&Iacute;","&deg;","&uuml;" };
>
>     public HTMLEncoder() {
>         mapChar2HTMLEntity = new HashMap();
>         int longueur = characters.length;
>
>         for (int i = [...]

> [...] = s.length();
>     final StringBuffer sb = new StringBuffer(longueur * 2);

Big buffer every time?

>     char ch;
>
>     for (int i = 0; i < longueur; ++i) {
>         ch = s.charAt(i);
>
>         if ((ch >= 63 && ch <= 90) || (ch >= 97 && ch <= 122))
>             sb.append(ch);
>         else {
>             String ss = (String) mapChar2HTMLEntity.get(new Character(ch));

New Character object every time? Mmm. You might want to choose a more optimized storage system so you don't have to create a new object every time.

>             if (ss == null) [...]

=UWQu
-----END PGP SIGNATURE-----

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
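The underlying issue is worth spelling out: the map was a mutable static shared between threads and re-populated by every constructor call. Removing static (as the poster did) gives each instance its own map; the other fix is to build the table once and never mutate it, so sharing is safe. The second approach looks like this in outline, sketched in Python rather than Java to keep it short (the entity names mirror the quoted code):

```python
# A module-level table built once and never mutated afterwards, so it is
# safe to share between threads; no per-call HashMap or Character objects.
_CHAR_TO_ENTITY = {
    "á": "&aacute;", "ú": "&uacute;", "ó": "&oacute;", "é": "&eacute;",
    "í": "&iacute;", "ñ": "&ntilde;", "Á": "&Aacute;", "Ú": "&Uacute;",
    "Ó": "&Oacute;", "É": "&Eacute;", "Í": "&Iacute;", "°": "&deg;",
    "ü": "&uuml;",
}

def html_encode(text):
    # Characters with a known entity are replaced; everything else passes through.
    return "".join(_CHAR_TO_ENTITY.get(ch, ch) for ch in text)
```

The Java equivalent would populate the map in a static initializer and wrap it in Collections.unmodifiableMap, rather than rebuilding it in the constructor.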
http://mail-archives.apache.org/mod_mbox/tomcat-users/201108.mbox/%3c5dde104b-8fb2-4aaa-b493-414a2d4d887f@correoDQ%3e
I've read some conflicting advice on the use of assert in setUp:

import unittest

class MyProcessor():
    """ This is the class under test """

    def __init__(self):
        pass

    def ProcessData(self, content):
        return ['some','processed','data','from','content'] # Imagine this could actually pass

class Test_test2(unittest.TestCase):

    def LoadContentFromTestFile(self):
        return None # Imagine this is actually doing something that could pass.

    def setUp(self):
        self.content = self.LoadContentFromTestFile()
        self.assertIsNotNone(self.content, "Failed to load test data")
        self.processor = MyProcessor()

    def test_ProcessData(self):
        results = self.processor.ProcessData(self.content)
        self.assertGreater(results, 0, "No results returned")

if __name__ == '__main__':
    unittest.main()

Running this gives:

F
======================================================================
FAIL: test_ProcessData (__main__.Test_test2)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Projects\Experiments\test2.py", line 21, in setUp
    self.assertIsNotNone(self.content, "Failed to load test data")
AssertionError: unexpectedly None : Failed to load test data

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (failures=1)

The purpose of setUp is to reduce the boilerplate code that would otherwise be duplicated between the tests in the test class during the Arrange phase. In the Arrange phase you set up everything needed for running the tested code. This includes any initialization of dependencies, mocks and data needed for the test to run.

Based on the above, you should not assert anything in your setUp method. So, as mentioned earlier: if you can't create the test precondition then your test is broken.
To avoid situations like this, Roy Osherove wrote a great book called The Art of Unit Testing. (For full disclosure: Lior Friedman, who was Roy's boss, is a friend of mine, and I worked closely with them for more than 2 years, so I am a little bit biased...)

Basically there are only a few reasons to interact with external resources during the Arrange phase (or with anything else that may cause an exception); most of them, if not all, are related to integration tests.

Back to your example: there is a pattern for structuring tests that need to load an external resource. Just a side note before you apply this pattern: make sure you can't keep this content as a static resource in your unit-test class; and if other test classes need to use the resource, extract it into a module.

The following pattern decreases the possibility of failure, since you make fewer calls to the external resource (note that setUpClass must be a classmethod):

class TestClass(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Since external resources such as other servers can provide bad
        # content, you can verify the content here and prevent the tests
        # from running; however, in most cases you shouldn't.
        cls.externalResourceContent = loadContentFromExternalResource()

    def setUp(self):
        self.content = self.copyContentForTest()

Pros:
Cons:
https://codedump.io/share/FFHqzJYOkw8/1/what-is-the-correct-way-to-report-an-error-in-a-python-unittest-in-the-setup-method
18 March 2011 14:56 [Source: ICIS news]

LONDON (ICIS)--Crude prices fell more than $1/bbl on Friday, taking Brent crude on ICE Futures close to $113/bbl, as a ceasefire was announced in Libya.

The Brent front-month contract had reached a high of $117.29/bbl earlier in the day as investors feared more violence in the Middle East after reports emerged that the United Nations had authorised military action against Libya.

By 14:45 GMT, May Brent crude had hit a low of $113.07/bbl, a loss of $1.83/bbl from the Thursday close of $114.90/bbl, before recovering to around $114.50/bbl. At the same time, April NYMEX light sweet crude futures were trading around $101.20/bbl, having hit a low of $100.32/bbl, a loss of $1.10/bbl from the previous close.
http://www.icis.com/Articles/2011/03/18/9445341/crude-futures-fall-more-than-1bbl-on-libya-ceasefire.html
In this article I will describe how to use an autocomplete textbox with values coming from a database, using jQuery.

Description

In my previous related article I explained how to use an autocomplete textbox using jQuery with hardcoded values. Here I will explain how, when typing into a textbox, the suggestions come from a database table. In my application I have used the Entity Framework for database creation. Now in my MVC3 application I have a model class named "student.cs" where I store some student information:

public class Student
{
    public int StudentID { get; set; }
    public string LastName { get; set; }
    public string FirstMidName { get; set; }
    public DateTime EnrollmentDate { get; set; }
}

Also I have a controller class named "studentController.cs" under the controller folder. For the studentController class I have an opening page in Razor named "index.cshtml". In this page I have a textbox where the user will enter the first name of the student, and it will behave like an autocomplete extender. See the following code where I am capturing the user input:

@using (Html.BeginForm())
{
    <p>
        Find by name: @Html.TextBox("SearchString")
        <input type="submit" value="Search" />
    </p>
}

Now we have to handle the textbox input in our student controller class. For that we have to write an action named "AutocompleteSuggestions" in our "studentController.cs" class. See the following code:

public JsonResult AutocompleteSuggestions(string term)
{
    SchoolContext db = new SchoolContext();
    var suggestions = from s in db.Students
                      select s.FirstMidName;
    var namelist = suggestions.Where(n => n.ToLower().StartsWith(term.ToLower()));
    return Json(namelist, JsonRequestBehavior.AllowGet);
}

See here I have used "JsonResult" instead of "ActionResult". That means the returned data will be in JSON format. The term parameter is the text the user has typed in the textbox (jQuery UI's autocomplete sends it in the query string as "term", so the parameter must be named accordingly). Here my "SchoolContext" is the Entity Framework database context. I query the "Student" model class to get the first names, then filter the "suggestions" variable with a lambda expression, checking the "StartsWith" method against the user-supplied text. Finally, we return the result in JSON format.

Next step: now we have to write the jQuery code for the autocomplete behaviour in our "index.cshtml" file. Please see the following code:

<script src="../../Scripts/jquery-ui-1.8.11.js" type="text/javascript"></script>
<script src="../../Scripts/jquery-ui-1.8.11.min.js" type="text/javascript"></script>
<script type="text/javascript">
    $(function () {
        $("#SearchString").autocomplete({
            source: "/Student/AutocompleteSuggestions",
            minLength: 1,
            select: function (event, ui) {
                if (ui.item) {
                    $("#SearchString").val(ui.item.value);
                    $("form").submit();
                }
            }
        });
    });
</script>

That's it. Now after running the application it will look like the following image: I have typed "s" and it is displaying, in the autocomplete list, all students whose first name begins with 's'.

This example was made with VS2010 and the Entity Framework.

Conclusion

So in this article we have learned how to use an autocomplete textbox with values coming from the database using jQuery. So try this out… it's a cool feature.
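For clarity: the AutocompleteSuggestions action replies with a plain JSON array of first names, which is one of the source formats the jQuery UI autocomplete widget accepts directly. An illustrative response (the names here are made up):

```json
["Sam", "Samantha", "Sandeep"]
```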
https://www.c-sharpcorner.com/UploadFile/b19d5a/autocomplete-textbox-using-database-return-value-in-Asp-Net/
Hi folks, wondering if the timestamp of a tag can be forced using scripts or any other method? I want to read a timestamp from OPC, separate from another Boolean value from OPC. These two correlate; I want to write the Boolean to a memory tag and force its timestamp to be the timestamp I read over OPC. I know this can be done using a module and would need that developed; I'm just looking for an alternative. When I do a system tag write it creates a custom timestamp, but I don't believe this would be applied to an alarm; it will use the actual tag timestamp.

Hi folks, you can try writing a QualifiedValue directly; something like:

from com.inductiveautomation.ignition.common.model.values import BasicQualifiedValue

paths = ["path/to/tag"]
values = [1]
qvs = [BasicQualifiedValue(v) for v in values]
for qv in qvs:
    qv.timestamp = system.date.getDate(2020, 11, 25)
system.tag.writeBlocking(paths, qvs)

Quote:
"from com.inductiveautomation.ignition.common.model.values import BasicQualifiedValue
paths = ["path/to/tag"]
values = [1]
qvs = [BasicQualifiedValue(v) for v in values]
for qv in qvs:
    value.timestamp = system.date.getDate(2020, 11, 25)
system.tag.writeBlocking(paths, qvs)"

Thanks @PGriffith, this worked. The only thing I had to change was value.timestamp to qv.timestamp. One thing to note for anyone else needing this: January is month 0.

Hi Paul, will it work if the value didn't change but the timestamp is different?

You could use the same method; you'll just want to read the tag before you write it, to leave the value unchanged. That does introduce a potential race (the value could change between your read and subsequent write) but is likely safe enough.

Thank you Paul…

Paul, out of curiosity, if this tag we just wrote a value (and timestamp) to is being historized, does the tag historian use this timestamp as the t_stamp to store in the historian database? Or does the historian use its own now() value when inserting?

In 7.9 and previous, there were separate notions of timestamp sources, but as far as I know, in 8.0+ the timestamp is basically always calculated in the historian, not 'trusted' from the value.

Darn. This kind of means I'm out of luck. If I import raw tag data from a CSV file using the system.tag.storeTagHistory function, I can specify the timestamp, but I lose out on the deadband optimization the historian provides, so I end up storing way more data than I need. If I instead write to the tag's value and timestamp, I gain the deadband and alarm functionality, but the historian won't use my timestamp.
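Outside of Ignition, the read-modify-write pattern discussed in the thread (preserve the value, replace only the timestamp) can be sketched in plain Python. BasicQualifiedValue and system.tag are Ignition APIs, so this stand-in uses an ordinary dataclass and a dict in place of the tag provider; it only illustrates the shape of the operation:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class QualifiedValue:
    """Minimal stand-in for a value + timestamp pair."""
    value: object
    timestamp: datetime


# a dict stands in for the tag provider
tags = {"path/to/tag": QualifiedValue(0, datetime(2020, 1, 1))}


def write_with_timestamp(path, value, ts):
    """Write a new value with an explicit timestamp."""
    tags[path] = QualifiedValue(value, ts)


def touch_timestamp(path, ts):
    """Read-before-write: keep the current value, replace only the
    timestamp. This is the race-prone step mentioned in the thread."""
    current = tags[path]
    tags[path] = QualifiedValue(current.value, ts)


write_with_timestamp("path/to/tag", 1, datetime(2020, 12, 25))
touch_timestamp("path/to/tag", datetime(2021, 1, 1))
print(tags["path/to/tag"])
```

In real Ignition code the equivalent of touch_timestamp would be a system.tag read followed by the QualifiedValue write shown in the thread, with the same caveat about a value change sneaking in between the two calls.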
https://forum.inductiveautomation.com/t/can-timestamp-of-a-tag-be-written-to-using-a-script/42444
When registered with our forums, feel free to send a "here I am" post here to distinguish human beings from SPAM bots.

If trying to compile while debugging is in progress, ask to stop debugging (previously it wasn't allowed) - needs testing on Windows.

(1) When trying to compile while debugging, the Information box asks, "Do you want to stop the debugger now?" There are three options: Yes, No and Cancel. What is the difference between No and Cancel?

Quote from: rhf on February 07, 2008, 03:07:22 am
(1) When trying to compile while debugging, the Information box asks, "Do you want to stop the debugger now?" There are three options: Yes, No and Cancel. What is the difference between No and Cancel?

There is no difference. Maybe I should have used an OK/Cancel combination to avoid confusion...

- Mac OS X: (10.4 and 10.5 Universal Binary, using bundled wxWidgets 2.8.7)

afb, my C::B builds on Mac are unusable because of two bugs: wxDynamicLibrary not working on dylibs, and combos in toolbars not appearing. Both have been confirmed on the wx bug tracker - but your builds are free of them? :? I'm a bit amazed at your magical bug-fixing abilities :lol:

--- include/wx/mac/carbon/chkconf.h 2007-05-14 11:09:36.000000000 +0200
+++ include/wx/mac/carbon/chkconf.h 2007-05-21 10:59:19.000000000 +0200
@@ -55,7 +55,7 @@
  */
 #ifndef wxMAC_USE_NATIVE_TOOLBAR
-    #define wxMAC_USE_NATIVE_TOOLBAR 1
+    #define wxMAC_USE_NATIVE_TOOLBAR 0
 #endif
 #endif

Anyway, good job, as usual. Someday the C::B website should be updated; it says only Linux and Windows are supported. (I'm not sure how good Mac support is at the moment, but I think it's usable, apart from some wizards perhaps.)

Quote: Anyway, good job, as usual
Thank you! I haven't been able to do much development, but the monthly builds have been "working" OK.

Nice one! :-) I wish I could try Mac OS someday. Do you see any chance of running it in a VM somehow? I would be willing to buy Mac OS, but not a Mac - this is just by far too expensive (even at eBay) for me. :-(
http://forums.codeblocks.org/index.php?topic=7757.msg58375
Step 2: Parts List

- 2 x servos
- 4 x light-dependent resistors (LDRs)
- 4 x 10K resistors
- 1 x Arduino
- 2 x 10K potentiometers (value doesn't matter)

Just finished up my take on this tracker. I used continuous rotation servos, so I had to modify the code a bit. Super fun project, thanks Geo Bruce!

Hi, could you send me the schematic for the build and the code? Because I think I'm going to go with the continuous rotation servos too. Thank you.

Hello, is there any chance you could send me the schematic for the build and the code? Because I think I'm going to go with the continuous rotation servos too. Thank you.

Hi, can you please send me the same stuff (the schematic for the build and the code for the display) to build a solar tracker? Thank you.

My wiring and build were pretty much the same as GeoBruce's, except for an LCD I added. For your individual motors, the servohStop and servovStop values might need adjusting. If it doesn't work as is, try setting both of these values to 90 and go from there.
#include <LiquidCrystal.h> // include LCD library
#include <Servo.h>         // include Servo library

LiquidCrystal lcd(2, 3, 4, 5, 6, 7);

Servo horizontal;     // horizontal servo
int servohStop = 95;
int servohLeft = 105; // move horizontal
int servohRight = 85;

Servo vertical;       // vertical servo
int servovStop = 95;
int servovUp = 105;
int servovDown = 87;  // move vertical servo

// LDR pin connections
// name = analog pin;
int ldrLeftTop = 1;     // LDR top left
int ldrRightTop = 0;    // LDR top right
int ldrLeftBottom = 2;  // LDR down left
int ldrRightBottom = 3; // LDR down right

int dtime = 25;      // delay time in milliseconds
int tolerance = 100; // difference tolerance between LDR readings
int count = 0;

void setup() {
  lcd.begin(16, 2); // begin 16x2 LCD
  Serial.begin(9600);
  // servo connections: name.attach(pin);
  horizontal.attach(9);
  vertical.attach(10);
}

void loop() {
  int leftTop = analogRead(ldrLeftTop);         // top left
  int rightTop = analogRead(ldrRightTop);       // top right
  int leftBottom = analogRead(ldrLeftBottom);   // down left
  int rightBottom = analogRead(ldrRightBottom); // down right

  // print resistance values to serial monitor for debugging
  Serial.println(leftTop);
  Serial.println(rightTop);
  Serial.println(leftBottom);
  Serial.println(rightBottom);
  Serial.println();

  count++;

  int avt = (leftTop + rightTop) / 2;       // average value top
  int avd = (leftBottom + rightBottom) / 2; // average value down
  int avl = (leftTop + leftBottom) / 2;     // average value left
  int avr = (rightTop + rightBottom) / 2;   // average value right
  int avgTot = (rightTop + rightBottom + leftTop + leftBottom) / 4;

  int dvert = avt - avd;  // check the difference between up and down
  int dhoriz = avl - avr; // check the difference between left and right

  if (count % 10 == 1) // mod the loop count so the LCD screen updates every 10 passes
  {
    lcd.clear();          // clear the screen and position cursor at top left
    lcd.print("dvert: "); // difference in vertical resistances read from the LDRs
    lcd.print(dvert);
    lcd.setCursor(0, 1);   // set LCD cursor to bottom left
    lcd.print("dhoriz: "); // difference in horizontal resistances read from the LDRs
    lcd.print(dhoriz);
  }

  if (-1 * tolerance > dvert || dvert > tolerance) // outside tolerance: change vertical angle
  {
    if (avt > avd) // if average resistance on top is greater than on bottom
    {
      vertical.write(servovUp);
    }
    else // average resistance on bottom is greater than on top
    {
      vertical.write(servovDown);
    }
  }
  else if (-1 * tolerance < dvert || dvert < tolerance) // within tolerance: stop vertical servo
  {
    vertical.write(servovStop);
  }

  if (-1 * tolerance > dhoriz || dhoriz > tolerance) // outside tolerance: change horizontal angle
  {
    if (avl > avr)
    {
      horizontal.write(servohLeft);
    }
    else
    {
      horizontal.write(servohRight);
    }
  }
  else if (-1 * tolerance < dhoriz || dhoriz < tolerance) // within tolerance: stop horizontal servo
  {
    horizontal.write(servohStop);
  }

  delay(dtime);
}

Thank you for the answers. So I wonder: your servos are modified to make a full spin; if I do that to my servos, would the code you sent me work?

Just finished my solar tracker. Thanks Geo Bruce for your share; it helped me a lot in my school project, and because of your help I could get a 19. Thanks very much. In my project I needed to change some things because the place where I was is very bright, and I adjusted the program so the "arm" escapes from the shadow. Thank you very much one more time!!!

I want to know what kind of project I have to create to paste the code. Can you help me?

What type of Arduino will be used to make the Arduino solar tracker?

Uno. Thanks for the reply.

Very nice :) Is it necessary for the resistors you used to be 10K ohms? Thanks

Hello GeoBruce: First time Arduino user and it's been 29 years since I've breadboarded!!! I got this error message. Any help is appreciated.
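The steering logic in the sketch above reduces to: average the four LDR readings pairwise, then move a servo only when the top/bottom or left/right difference exceeds the tolerance dead band. A compact Python rendering of just that decision step (illustrative only; it is not part of the Instructable, and the function name is my own):

```python
TOLERANCE = 100  # same dead band as the Arduino sketch


def track_step(left_top, right_top, left_bottom, right_bottom):
    """Return (vertical, horizontal) commands: +1/-1 to move, 0 to stop."""
    avt = (left_top + right_top) / 2       # average top
    avd = (left_bottom + right_bottom) / 2 # average down
    avl = (left_top + left_bottom) / 2     # average left
    avr = (right_top + right_bottom) / 2   # average right
    dvert = avt - avd
    dhoriz = avl - avr
    # stay put while the difference is inside the dead band
    vert = 0 if abs(dvert) <= TOLERANCE else (1 if dvert > 0 else -1)
    horiz = 0 if abs(dhoriz) <= TOLERANCE else (1 if dhoriz > 0 else -1)
    return vert, horiz


print(track_step(500, 520, 300, 310))  # brighter on top -> (1, 0)
```

If the tracker "runs away from the light", as one commenter reports, swapping the sign convention here (or the servoUp/servoDown values in the sketch) is the first thing to check.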
avrdude: stk500_getsync(): not in sync: resp=0x00

A lovely little project to get our S.T.E.M. club thinking - well done.

you are freakin' awesome

This is my solution. Even added on a little volt meter. I removed the potentiometers and just set the sensitivity via the software. I used a sensor shield in order to remove a bunch of power wires for the servos. I used two terminal blocks to wire everything up, making it solder-free. I'll eventually make up an Arduino shield and a sensor PCB with ribbon cable, which will make the entire thing just plug together.

Here ya go GEO BRUCE... I noticed you like Fritzing, so I finished out your design and applied it to an Arduino Micro. And for anyone who is into DIY electronics, Fritzing is a FREE circuit program crammed full of AWESOME STUFF. -PATRICK

Hi, I'm done with the build but it seems like the tracker actually moves away from the light. I really need help because this is a project that is going to get graded. Thank you.

Here's a thought on simplifying the wiring. You could hook up the TopLeft LDR and BottomRight LDR in series, and the BottomLeft and TopRight LDRs in series. Have one end connection going to positive, the other end connection going to negative, and the middle connections going to two analog pins. Doing it this way, you: a) don't need any pull-up resistors or matching connections on a breadboard for resistors, b) only need two analog inputs, and c) only need 4 wires going to the LDR arrangement (two for power to it, two for analog signals back from it).

I love solar tracker projects :D

What causes the solar tracker that I made to be insensitive to light? When in dark conditions and then given light, the solar tracker works.

Will you please give the code for this project, Sir?

Is this the circuit for the process? Do you need to use the potentiometers?

I made this version up. Still have some tweaks to make before it's finished. It's going to hold a 6V 320mA solar cell and eventually have a little LED volt meter attached to it.
I may even make it self-powering... which would be interesting...
http://www.instructables.com/id/Arduino-Solar-Tracker/meta-info
How to not get duplicate letters?

Hi, for the second part of this exercise, the function common_letters seems to make no reference to string_two. How is that so? A deeper explanation of the whole part 2 of this exercise is much appreciated.

We only need to iterate over one of the strings, since we are not looking for uniques in each string (which would mean iterating both strings). We are looking for letters in common. Given a letter (or character) from the one string, we use in to see if it is in the other string. If not, go to the next letter. If it is, append the letter to the return list only if it is NOT already present.

I don't understand how not (letter in common) works in this function:

common = []
for letter in string_one:
    if (letter in string_two) and not (letter in common):

For example, when letter breaks down as b, a, n, a, n, a in (letter in string_one) and (letter in string_two), it tries to match up the letters and will produce ['a', 'a', 'a']. But I think common is an empty list, so how does it act to remove the duplicates? In other words, how does the variable common work in this case? Please advise.

not (letter in common) will be True if letter is not found in the common list. The same can be said of this:

letter not in common

which does not require brackets. NOT will negate a boolean, so not True will be False, and not False will be True.

Thank you. Very helpful.

You are not the only one. The "and not" really threw me off. I don't think it was ever mentioned before. It's good to know what it means now, though.

So what does it mean? Describe it for us. We can make this a learning moment.

Oh, I just didn't know we could use "and not" to combine two conditions in an if statement. If it were mentioned in the Hint section or somewhere in the IF lesson, that'd be great. Thanks for your prompt response!
Understood. Let's walk back a little to where we only know about if and nothing about and or not.

if condition:
    # condition is truthy

if is like an operator that works on its argument. Given any object or expression it will behave the same: evaluate it in terms of truthiness. Nothing that isn't truthy will get past this point; it will be shunted off to the next clause (elif, else) or to the program statement following.

Logic is all about truth factors. There are only two: True and False. That's it. Every object can be boiled down to one of these two. Call them cases if you will; I do, for variation in thinking about logic.

it is going to rain tonight

Is that truthy, or falsy? Other factors may come into play. Are there clouds on the horizon? Is it forty below zero outside (only thing reining (pun) is cold)? The above may well prove out, but it is only an assertion until then.

it rained last night

Is that truthy or falsy? Assuming it did rain, we would take it as truthy.

if it rained last night:
    # the assertion was correct

But in cases where there is doubt, we always need another avenue for testing assertions. Logical operators are a tool we use for this purpose (among others, perhaps).

if prediction of rain AND temperature suitable AND cloudy:
    # expect rain

Logic is about judgement and determination, as opposed to imperative.

if prediction:
    # expect rain

AND is an operator that short-circuits if it encounters a falsy operand. The whole thing just shuts down and hands back False. That's the end of the evaluation. This tells us that if we have a condition that is likely to fail, we should put it last and let the others have a chance to succeed first.

x = []
if len(x) and x[0]:

This will not raise an exception because the doable operation came first. The second operand will never be evaluated.

x = []
if x[0] and len(x):

This will toss up an exception so quickly your hands won't have left the keys yet. The operator is not to blame here; the programmer is.

x = [1]
if not x[0] and len(x):

What will that do? Or this:

if len(x) and not x[0]:

Bottom line: go down this rabbit hole and don't come back up until you are dreaming about logical operators in your sleep, literally. Arm yourself with simple truth tables for each of AND, OR, NOT. Bear in mind that AND and OR are operators, whereas NOT is a modifier. Operators are binary; modifiers are unary. They prefix (and sometimes suffix) a singular adjacent object.

not a

First off, whatever a evaluates as, truthy or falsy, will become a boolean. The next thing to happen will be toggling that boolean. Given we have only two factors, that is simple enough: True becomes False; False becomes True. There is hardly any evaluation process here; it is imperative. But the evaluation of a was determinative. We had to squeeze it to see what came out.

Wow, thank you!! I'm not sure if you work for Codecademy. If you do, do you know if the company will:

- update the content for the Python courses where the prerequisite is Python 2? Do those courses use Python 2 instead of Python 3?
- launch more Python courses for Pro membership in the future?

Not CC staff; we are volunteers on this side. Contact your advisor or someone on the support team for reliable information with regard to upcoming courses. All we know is that Python 2 will be sunsetting next year, but the content here will remain for a good long time afterward. Whether or not any deficiencies will be remedied, one cannot say.

oh god me too, im feeling so dumb for not understanding this… i have to look at almost all the answers, im feeling very disappointed in this one

I'm having trouble in here:

if (letter in string_two) and not (letter in common):
    common.append(letter)

- why the "and not letter in common"?
- also, why is the first part relevant? I deleted it and got the same answer. thank you

not (letter in common) will be True if letter is not found in the common list.
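The short-circuit behaviour described above can be confirmed by running the expressions directly; a small script (my own, not from the lesson) that answers the two teaser questions posed with x = [1]:

```python
x = []
print(len(x) and x[0])  # 0: falsy first operand, so x[0] is never evaluated

try:
    x[0] and len(x)
except IndexError:
    print("IndexError: operand order matters")

x = [1]
print(not x[0] and len(x))  # False: not 1 is False, so len(x) is skipped
print(len(x) and not x[0])  # False: 1 is truthy, so the result is not 1
```

Note that and returns one of its operands, not necessarily a bool, which is why the first print shows 0 rather than False.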
We don't want any duplicate items in the common list, hence checking first, before appending.

Hi guys. I'm stuck on an exercise. The instruction is: Write a function called common_letters that takes two arguments, string_one and string_two, and then returns a list with all of the letters they have in common. I require my code to only return one of each common letter, not duplicates.

def common_letters(string_one, string_two):
    common = []
    for i in string_one:
        if i in string_two:
            common.append(i)
    return common

print(common_letters('manhattan', 'san francisco'))

# Output in the terminal
['a', 'n', 'a', 'a', 'n']
# Required output in terminal
['a', 'n']

Solved with the following code:

def common_letters(string_one, string_two):
    common = []
    for i in string_one:
        print(i)
        if i in string_two:
            if not i in common:
                common.append(i)
    return common

This is what I used to solve this exercise:

def common_letters(string_one, string_two):
    a = []
    for i in string_one:
        if i in string_two and i in string_one and i not in a:
            a.append(i)
    return a

Don't we already know it is in string_one? We are, after all, iterating over that string, and it is the current value being examined. Just ribbing you, though; it is a fair point. Question: have you been exposed to list objects yet?

Yes, I finished a few exercises on lists.

Actually, I scrolled to the top of this topic… The lesson is looking for common letters, not common words, so a list is not needed since we are iterating character by character. Common words would be another story. My apologies.

I did this one. It worked:

def common_letters(string_1, string_2):
    common = []
    for i in string_2:
        if i in string_1:
            if i not in common:
                common.append(i)
    return common

To add on to a previous lesson, I tried to convert this to a list comprehension, but I have a question about it. In the following example:

def common_letters(string_1, string_2):
    common = []
    common = [i for i in string_2 if i in string_1 if i not in common]
    return common

print(common_letters('manhattan', 'san francisco'))

Is this doing everything in the list comprehension before it gets assigned into the var common?
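To answer the last question directly: yes, the comprehension is fully evaluated before the assignment, so its `if i not in common` test consults the old list bound to common, which is still empty; it therefore removes no duplicates. A quick check alongside the loop version that does deduplicate (a sketch, not the official lesson solution):

```python
def common_via_comprehension(string_1, string_2):
    common = []
    # "common" is still the empty list while the comprehension runs,
    # so the "not in common" filter never rejects anything
    common = [i for i in string_2 if i in string_1 if i not in common]
    return common


def common_letters(string_one, string_two):
    common = []
    for letter in string_one:
        # the list grows as we loop, so repeats are filtered out
        if letter in string_two and letter not in common:
            common.append(letter)
    return common


print(common_via_comprehension('manhattan', 'san francisco'))  # duplicates remain
print(common_letters('manhattan', 'san francisco'))            # ['a', 'n']
```

If order does not matter, `sorted(set(string_1) & set(string_2))` is another compact way to get uniques.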
https://discuss.codecademy.com/t/how-to-not-get-duplicate-letters/436943
mmap2

Language: en
Version: 2002-01-31 (fedora - 16/08/07)
Section: 2 (System calls)

NAME

mmap2 - map files or devices into memory

SYNOPSIS

#include <sys/mman.h>
#include <syscall.h>
#include <errno.h>

_syscall6(void *, mmap2, void *, start, size_t, length, int, prot, int, flags, int, fd, off_t, pgoffset)

/* Using syscall(2) may be preferable; see intro */

DESCRIPTION

The mmap2() system call operates in exactly the same way as mmap(2), except that the final argument specifies the file offset in 4kB units (instead of bytes). This enables applications that use a 32-bit off_t to map larger files (typically up to 2^44 bytes).

RETURN VALUE

On success, mmap2() returns a pointer to the mapped area. On error -1 is returned and errno is set appropriately.

ERRORS

EFAULT - Problem with getting the data from userspace.

NOTES

mmap2() is available since Linux 2.3.31. It is Linux specific, and should be avoided in portable applications. On 32-bit systems, mmap2() is used to implement the mmap64() function that is part of the LFS (Large File Summit).

CONFORMING TO

This system call is Linux specific.

SEE ALSO

getpagesize(2), mmap(2), mremap(2), msync(2), shm_open(3)
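The 2^44-byte figure in the man page follows directly from the page-unit offset: a 32-bit value passed as pgoffset counts 4096-byte pages rather than bytes. A quick arithmetic check (illustrative; not part of the man page):

```python
PAGE_SIZE = 4096     # offset unit used by mmap2, i.e. 2**12 bytes
PGOFFSET_BITS = 32   # a 32-bit off_t reinterpreted as a page count

# 2**32 pages * 2**12 bytes/page = 2**44 bytes of addressable file offset
max_mappable = (2 ** PGOFFSET_BITS) * PAGE_SIZE
print(max_mappable == 2 ** 44)  # True
```

The same reasoning explains why mmap2 is a suitable backend for mmap64() on 32-bit systems: the widened reach comes from the unit change, not from widening the argument itself.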
https://www.linuxcertif.com/man/2/mmap2/36265/
Source code: Lib/textwrap.py

textwrap.wrap(text, width=70, **kwargs)
Wraps the single paragraph in text (a string) so every line is at most width characters long. Returns a list of output lines, without final newlines. Optional keyword arguments correspond to the instance attributes of TextWrapper, documented below. width defaults to 70.

textwrap.fill(text, width=70, **kwargs)
Wraps the single paragraph in text, and returns a single string containing the wrapped paragraph.

textwrap.dedent(text)
Remove any common leading whitespace from all lines in text. For example:

def test():
    # end first line with \ to avoid the empty line!
    s = '''\
    hello
      world
    '''
    print(repr(s))          # prints '    hello\n      world\n    '
    print(repr(dedent(s)))  # prints 'hello\n  world\n'

TextWrapper instance attributes:

Note: If replace_whitespace is false, newlines may appear in the middle of a line and cause strange output. For this reason, text should be split into paragraphs (using str.splitlines() or similar) which are wrapped separately.

drop_whitespace (default: True)
If true, whitespace that, after wrapping, happens to end up at the beginning or end of a line is dropped (leading whitespace in the first line is always preserved, though).

initial_indent (default: '')
String that will be prepended to the first line of wrapped output. Counts towards the length of the first line.
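The wrap() and dedent() behaviour described above can be exercised directly with the standard library; a short demo (the sample text and indents are my own, beyond the doc's test() snippet):

```python
import textwrap

text = ("The textwrap module wraps a single paragraph so that "
        "every line is at most width characters long.")

# initial_indent counts toward the first line's length, as documented
wrapped = textwrap.wrap(text, width=30,
                        initial_indent="* ", subsequent_indent="  ")
for line in wrapped:
    print(line)

# dedent strips the common leading whitespace, as in the doc example
s = "    hello\n      world\n    "
print(repr(textwrap.dedent(s)))  # 'hello\n  world\n'
```

fill(text, width=30) would produce the same lines joined with newlines into one string.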
http://docs.python.org/release/3.2.1/library/textwrap.html
Maximum DB Size?

745549 Jan 10, 2010 11:24 PM

Mark, hello; can you please provide some kind of formula to estimate BDB JE limits? Maximum database size? Maximum number of records? Maximum record size? Maximum key size? Thank you! Andrei.

1. Re: Maximum DB Size?
Linda Lee-Oracle Jan 11, 2010 7:42 AM (in response to 745549)

Andrei,

There are no limits on key size, value size, number of records or database size. However, these sizes, along with your access pattern, do affect what resources your application will need.

- Key and data size impact how much of the database can fit into cache. We have seen applications with a full range of key and record sizes, ranging from a few bytes, to thousands of bytes, to megabytes. The larger the key, the smaller the amount of the internal btree that can fit into cache.
- How much of the database is "hot" matters more than the size of the database itself. We have seen applications that use databases that range from quite small to terabytes.

You may want to look at com.sleepycat.je.util.DbCacheSize to get some guidelines on cache sizing, and to do some estimates of your own. Please also read the getting started guide, in particular this section:

Regards,
Linda

2. Re: Maximum DB Size?
740005 Mar 30, 2010 4:21 PM (in response to 745549)

Presumably there is some internal id for a database that isn't its name, and that id has a maximum (or is a variable-length encoding?). While I will only have a few thousand databases at any one time, I'll be regularly creating and removing them. May I assume that over the lifetime of an environment I should be able to create and destroy more than 2^32 databases?

3. Re: Maximum DB Size?
Charles Lamb Mar 30, 2010 5:38 PM (in response to 740005)

Actually, there is a limit of 2^31 non-replicated databases and 2^31 replicated databases. You can subvert that by doing a dump and reload to a fresh environment.

Charles Lamb

4. Re: Maximum DB Size?
740005 Mar 30, 2010 5:45 PM (in response to Charles Lamb)

Just to be clear, that 2^31 limit is over the lifetime of an environment, not a limit on the number that exist at any one time? Knowing this, I can still leverage the benefits of truncateDatabase by recycling databases after emptying them. Good to know.

5. Re: Maximum DB Size?
Charles Lamb Mar 30, 2010 5:48 PM (in response to 740005)

Yes, that is correct. See com.sleepycat.je.dbi.DbTree.lastAllocated*DbId.

Charles Lamb

6. Re: Maximum DB Size?
Charles Lamb Mar 30, 2010 5:52 PM (in response to Charles Lamb)

Ooops. Scratch that. truncate also allocates a new DbId.

7. Re: Maximum DB Size?
740005 Mar 30, 2010 6:37 PM (in response to Charles Lamb)

Assuming it could be optimized, a ranged delete operation would be a great addition. I want to delete 100 to 10k consecutive records at a time, on a continual basis. For now I'll proceed with a single database and individual record deletes. Thanks.

8. Re: Maximum DB Size?
Greybird-Oracle Mar 30, 2010 7:49 PM (in response to 740005)

Hi, I'm trying to read between the lines and have concluded that you're using key ranges instead of databases, because you can't create enough databases in total over the lifetime of your app. Correct? Would support for 2^63 databases solve your problem? Not promising anything, just curious.

An optimized range deletion is a nice thing, and we should probably do it in the future. But because of JE's architecture I don't think it will ever be nearly as fast as a Database removal or truncation, which is already optimized.

--mark

9. Re: Maximum DB Size?
Greybird-Oracle Mar 30, 2010 8:07 PM (in response to Greybird-Oracle)

Also, what is the average size -- number of records, key/data sizes -- of each data set (what you'd like to store in each Database)? If the average size of a database is extremely small, the per-Database overhead may be a big factor.

--mark

10. Re: Maximum DB Size?
740005 Mar 30, 2010 8:09 PM (in response to Greybird-Oracle)

Yes, 2^63 databases would work. Given a 3-year hardware lifecycle and a not unreasonable expected write rate (and therefore expected database turnover), 2^31 would make using truncateDatabase too risky (within a safety factor of 2x). Alternatively, if the max database id were exposed in stats or something, we could know when it was getting close and initiate an automated re-provisioning process (where the host is wiped and data re-replicated back in).

For the purposes of discussion, our records are keyed by "writer id" + "per-writer sequence number". After accumulating so much data per writer, it gets moved elsewhere (out of BDB) and deleted, while additional writes happen at the tail. Deleting that as efficiently as possible is preferred. Given the current APIs, that translates to using optimally 2 databases per "writer id" (the one we just overflowed and will soon delete, and the one we are now writing to). However, another concern was the FAQ entry about checkpoint overhead (more than a magnitude worse) when using multiple databases. Our use case would have multiple writes to each db, so it wouldn't be as pathological. I was going to write some test code for that scenario to see what it looks like.

11. Re: Maximum DB Size?
740005 Mar 30, 2010 8:20 PM (in response to Greybird-Oracle)

Key size: 32 bytes
Avg data size: ~150 bytes
Avg records per database: ~5000

12. Re: Maximum DB Size?
Charles Lamb Mar 30, 2010 9:29 PM (in response to 740005)

We could easily expose the max database id in the stats.
I will also give you a quick and dirty hack to obtain it from the Environment, although we would not guarantee that it would be a supported API in future releases:

import com.sleepycat.je.DbInternal;

Environment env = ...;
DbInternal.getEnvironmentImpl(env).getDbTree().getLastLocalDbId()

should do the job for you.

Charles Lamb

13. Re: Maximum DB Size?
Greybird-Oracle Mar 30, 2010 10:27 PM (in response to 740005)

I'm not sure whether the small size of your databases, and the per-database overhead including checkpointing, will outweigh the advantages of database removal over record removal. You're wise to test this for your particular parameters.

--mark

14. Re: Maximum DB Size?
740005 Mar 30, 2010 10:34 PM (in response to Charles Lamb)

Nice. That will let us monitor it, and engineer a workaround if need be. I'm passing this info around internally, as I know at least one other team was considering doing something similar. Is the checkpoint overhead correlated with the number of databases that ever existed, that exist now, or that had activity since the last checkpoint?
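The "too risky within a safety factor of 2x" concern in this thread is easy to quantify: every database create and truncate consumes one id from the 2^31 space, so at a given allocation rate the environment's lifetime is a simple division. A back-of-the-envelope sketch (the 20 ids/second rate is a made-up illustration, not a figure from the thread):

```python
ID_LIMIT = 2 ** 31  # per-environment id space for non-replicated databases


def years_until_exhausted(ids_per_second):
    """How long until an environment burns through its database-id space."""
    seconds = ID_LIMIT / ids_per_second
    return seconds / (365 * 24 * 3600)


# e.g. churning ~20 ids/second across many writers (create + truncate)
print(round(years_until_exhausted(20), 1))  # ~3.4 years
```

At that hypothetical rate the id space lasts only a little longer than the 3-year hardware lifecycle mentioned above, which is exactly why the poster wanted the current max id exposed for monitoring.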
https://community.oracle.com/message/4202912
New in 5.5: Provide your classes from App_Code

Martin Hejtmanek — Jun 2, 2010
Tags: CMS, customization, Kentico, Kentico CMS 5.5

5.5 has been out for some time and you sure deserve to know what is new. You surely know about the standard features you can see, but there are also some API improvements and things that weren't yet in the Beta version. So here is the first one of them...

Hi there,

There are certain tasks that require you to do more than you actually want for quite simple actions. One of them is defining your custom scheduled task. The scheduled task requires you to provide the assembly name and class name so it can load the class dynamically and execute it. This clearly results in you having to open Visual Studio, add a new class library, implement the class in there, take care of the references, and recompile the project if a hotfix is applied. It is kind of too complicated if you ask me. That is why 5.5 offers a better, alternative solution.

Providing a class from App_Code

As the title indicates, in 5.5 there is a way you can provide your custom classes (of any kind) from the App_Code folder. There isn't any DLL you could refer to for the App_Code of your web site project, but now there is a handler to take care of this. The scheduled task configuration then looks like this:

Task assembly name: App_Code (case sensitive)
Task class name: Custom.MyTask (any selector that you handle and provide the task class)

Now that we have the task defined, we can look at how this is handled in App_Code so you can provide the class object.
Open the file ~/App_Code/Global/CMS/CMSCustom.cs; you can see this code in there (I removed comments to make it shorter):

public static object GetCustomClass(string className)
{
    switch (className)
    {
        case "Custom.MyTask":
            return new MyTask();
    }

    return null;
}

public class MyTask : ITask
{
    public string Execute(TaskInfo ti)
    {
        EventLogProvider ev = new EventLogProvider();
        ev.LogEvent(EventLogProvider.EVENT_TYPE_INFORMATION, DateTime.Now, "MyTask", "Execute", null, "This task was executed from '~/App_Code/Global/CMS/CMSCustom.cs'.");

        return null;
    }
}

As you can see, there is a handler that you can implement to provide the custom objects. It is basically called with the class name you enter in the task, which in our case is "Custom.MyTask". And since you know this targets the scheduled task, you provide a new object with the ITask interface just as you would in your DLL. This particular sample task just logs its execution to the Kentico CMS event log so you can see that it really works. Yours can do just about anything.

Where can it be used?

You can use this anywhere in the system where you provide the assembly name (such as notification/payment gateways, etc.). If you put it in App_Code, the system will just ask for the class through the handler instead of loading the DLL.

Advantages of this approach

The advantages are pretty clear, but just to summarize:

- No need to recompile your DLLs after applying a hotfix or upgrade
- No need to recompile any DLL if you change the task code
- Able to provide different task handlers based on the context of execution (e.g. for each web farm server / site / etc.)

And that is pretty much all for today. Just as in the previous post, this will also be used in our simple Chat module I will elaborate on later.

Jan 26, 2011

Hi, the <add key="CMSCustomEcommerceProviderAssembly" value="App_Code" /> tag is not necessary in this case; it is related to something else. Does your switch in GetCustomClass have a case for "App_Code.CustomProvider.CustomGateway"? Could you please post the GetCustomClass method code?
Could you please post the GetCustomClass method code?

vlad-inorbital commented on Jan 26, 2011

Hi Martin, I am trying to incorporate the example, but I receive this error: "[SchedulingExecutor.ExecuteTask]: Cannot load provider class 'CMS.CustomProvider.CustomGateway' from assembly 'App_Code'."

Here were my steps: I created CustomGateway.cs and moved it to the App_Code/Global/ folder. I added a task with these parameters:

Task display name: CustomGateway
Task name: CustomGateway
Task assembly name: App_Code
Task class name: App_Code.CustomProvider.CustomGateway

and enabled it. I tried adding the <add key="CMSCustomEcommerceProviderAssembly" value="App_Code" /> tag to web.config. I'm sure I messed something up along the way, but I'm not sure how to continue.

Martin Hejtmanek commented on Nov 9, 2010

Hi John, just use the way of combining C# and VB in one project. You basically have two options then:

1) Rewrite the CMSCustom to VB and place it in the VB folder
2) Keep the current CMSCustom, just implement the switch in C#, and implement the rest in VB in another class

I will make a blog post about it later if I have enough time ...

John commented on Nov 9, 2010

Hi Martin, could you please do me a huge favor and give an example of this in VB.NET?
John commented on Jun 24, 2010

I forgot <add key="CMSUseCustomHandlers" value="true"/>. NOW IT WORKS! Thanks

Martin Hejtmanek commented on Jun 24, 2010

Did you also enable the custom handlers with this?

<add key="CMSUseCustomHandlers" value="true"/>

John commented on Jun 24, 2010

Hi, thanks for your efforts to help me. In Web.config in appSettings:

<add key="CMSCustomHandlersAssembly" value="App_Code"/>

My code in CMSCustom.cs (I left out the using statements):

public static class CMSCustom
{
    public static void Init()
    {
        MacroResolver.OnResolveCustomMacro += new MacroResolver.MacroHandler(ResolveCustomMacro);
        ClassHelper.OnGetCustomClass += new ClassHelper.GetClassEventHandler(GetCustomClass);
    }

    /// <summary>
    /// Custom macro handler
    /// </summary>
    /// <param name="sender">Sender (active macro resolver)</param>
    /// <param name="expression">Expression to resolve</param>
    /// <param name="match">Returns true if the macro matches (was resolved)</param>
    public static string ResolveCustomMacro(MacroResolver sender, string expression, out bool match)
    {
        match = false;
        string result = expression;

        // Add your custom macro evaluation
        /*
        switch (expression.ToLower())
        {
            case "someexpression":
                match = true;

And implement all the classes, such as "App_Code.CustomOrderInfoProvider", in App_Code. However, I do not recommend doing that just yet, because 6.0 will provide a much smoother way of e-commerce customization that will also be possible from App_Code.

Patrick D'Souza commented on Jun 7, 2010

Can you comment on using a similar approach for custom ecommerce providers?

Jeroen Fürst commented on Jun 3, 2010

Oh haha, of course, that did the trick :) Thanks Martin!!!

Martin Hejtmanek commented on Jun 3, 2010

Hi Jeroen, sure, as I said, in such a case you need to implement them all in there.
In case there is no option to insert a specific class name, the class name is joined with the assembly name, so in this case it is "App_Code.CustomSecurityHandler". Your implementation should look like this:

public static object GetCustomClass(string className)
{
    switch (className)
    {
        case "App_Code.CustomDataHandler":
            return new CustomDataHandler();
        case "App_Code.CustomSecurityHandler":
            return new CustomSecurityHandler();
        ...
    }

    return null;
}

public class CustomSecurityHandler : AbstractSecurityHandler
{
    public override object OnAuthentication(object userInfo, string username, string password)
    {
        return userInfo;
    }
    ...
}

I tried that and it is working like a charm.

Jeroen Fürst commented on Jun 3, 2010

Hmmm, getting a lot of "The class CustomSecurityHandler couldn't be loaded" exceptions when moving the classes to App_Code ... Any thoughts?

Martin Hejtmanek commented on Jun 2, 2010

Hi John, I think this should be able to cover this too, since all the code that loads external classes is handled centrally. Try adding this to your web.config:

<add key="CMSCustomHandlersAssembly" value="App_Code"/>

Check if GetCustomClass is called, and let us know if it worked. Note that in this case you should provide all the classes available in the current library. To be honest, the overall customization using entire compiled libraries must be seriously revamped, because if you want to override just one method, it just gets too complicated. It is already part of our plans.

John commented on Jun 2, 2010

We would like to see the same approach for custom event handling! Thanks

Ivan Robalino commented on Jun 2, 2010

Thanks Martin. I've been waiting for this for a long time. Our implementation has several custom scheduled tasks, and it was cumbersome to apply any hotfix since I had to recompile the DLLs, etc.

Martin Hejtmanek commented on Jun 1, 2010

Great to hear.
If there are any other things you are looking for from time to time, definitely let me know. They may already be there, or can occasionally be added in new versions.

Jeroen Fürst commented on Jun 1, 2010

Thanks Martin, exactly what we are looking for!
https://devnet.kentico.com/articles/new-in-5-5--provide-your-classes-from-app_code
You are given N qubits (1 ≤ N ≤ 8) in the zero state |0...0⟩. You are also given a bitstring bits which describes a non-zero basis state |ψ⟩ on N qubits. Your task is to generate a state which is an equal superposition of |0...0⟩ and the given basis state:

|S⟩ = (1/√2)(|0...0⟩ + |ψ⟩)

You have to implement an operation which takes the following inputs:

- an array of qubits qs,
- an array of boolean values bits representing the basis state |ψ⟩; this array has the same length as the array of qubits.

The operation doesn't have an output; its "output" is the state in which it leaves the qubits.

An array of boolean values represents a basis state as follows: the i-th element of the array is true if the i-th qubit is in state |1⟩, and false if it is in state |0⟩. For example, the array [true; false] describes the 2-qubit state |10⟩, and in this case the resulting state should be (1/√2)(|00⟩ + |10⟩).

Your code should have the following signature:

namespace Solution {
    open Microsoft.Quantum.Primitive;
    open Microsoft.Quantum.Canon;

    operation Solve (qs : Qubit[], bits : Bool[]) : ()
    {
        body
        {
            // your code here
        }
    }
}
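Outside the required Q# signature, it can help to write out exactly which amplitudes the target state has. The sketch below is a plain-Python sanity check (not part of the solution itself), and it assumes a big-endian convention where bits[0] is the most significant bit of the basis-state index — simulators differ on this, so treat the ordering as an assumption:

```python
def target_state(bits):
    """Return the amplitudes of (|0...0> + |bits>)/sqrt(2) as a list.

    bits[i] corresponds to qubit i; we assume qubit 0 maps to the most
    significant bit of the basis-state index (an illustrative convention).
    """
    n = len(bits)
    amps = [0.0] * (2 ** n)
    index = 0
    for b in bits:
        index = (index << 1) | int(b)
    half = 2 ** -0.5
    amps[0] += half        # amplitude of |0...0>
    amps[index] += half    # amplitude of |bits>
    return amps

# [true; false] from the problem statement -> (|00> + |10>)/sqrt(2)
print(target_state([True, False]))
```

With this convention, [True, False] maps to index 2, so the non-zero amplitudes sit at positions 0 and 2, matching the |00⟩ and |10⟩ terms in the statement.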
http://codeforces.com/problemset/problem/1002/A2
How to check if a file exists in Deno

In this tutorial, we are going to learn how to check if a file or directory exists in Deno using the fs module.

The exists() method

We can use the exists() method to check whether a file exists in the file system. It takes the file path as an argument and returns a promise that resolves to a boolean: true if the file exists, false if it doesn't. The exists() method works asynchronously.

import { exists } from "";

exists("./math.js").then((result) => console.log(result));

To run the above code, we need to allow Deno to read the ./math.js file by using the --allow-read flag.

deno run --unstable --allow-read=./math.js app.js

Output:

true

Note: all of the methods inside the fs module are currently unstable, so we have used the --unstable flag to enable them at runtime.

The existsSync() method

The existsSync() method is used to check for file existence synchronously. It takes the file path as an argument and returns true if the file exists, otherwise false.

import { existsSync } from "";

if (existsSync("./math.js")) {
  console.log("file is found");
} else {
  console.log("file is not found");
}

deno run --unstable --allow-read=./math.js app.js

Output:

file is found

Checking whether a directory exists

Similarly, you can use the above methods to check if a directory exists.

Example:

import { exists, existsSync } from "";

// asynchronous
exists("./images").then((result) => console.log(result));

// synchronous
if (existsSync("./images")) {
  console.log("file is found");
} else {
  console.log("file is not found");
}

In the above code, we are checking whether a ./images directory exists in the file system.
https://reactgo.com/deno-check-file-exists/
You may want to randomize a list of values in a Python program in order to, for example, teach a student how to reorganize them properly, or to create a search game where some values in the list are hidden among several others. Python's list methods let you manipulate lists, and the random class lets you select random items from one list, which you can append to another to create a randomized list in your program.

Open a Python program file. Insert the cursor at the top line. Type the following code:

import random

myList = [1, 2, 3, 4, 5]

The first line imports the random class, which lets you get random values. The second line creates a list with the numbers one to five in order.

Type the following code:

def randomList(a):
    b = []
    for i in range(len(a)):
        element = random.choice(a)
        a.remove(element)
        b.append(element)
    return b

This creates a function called "randomList" that takes a list as a parameter. It creates a temporary list to hold the randomized items. On each pass through the loop, it randomly selects one element, removes it from the original list, then appends it to the end of the temporary list. When all of the items in the original list have been moved, the function returns the randomized list.

Save and run the program.
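As a side note beyond the article's hand-rolled version, the standard library's random module already provides this: a shorter equivalent of the function above can be written with random.sample, which also leaves the input list untouched (the article's version empties it):

```python
import random

def random_list(a):
    """Return a new list with the items of `a` in random order,
    without modifying the original list."""
    return random.sample(a, len(a))

my_list = [1, 2, 3, 4, 5]
print(random_list(my_list))  # e.g. [3, 1, 5, 2, 4]
print(my_list)               # still [1, 2, 3, 4, 5]
```

random.shuffle(my_list) is another option when shuffling in place is acceptable.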
http://smallbusiness.chron.com/randomize-list-python-26724.html
Learn conditional logic with branch and loop statements

This tutorial teaches you how to write code that examines variables and changes the execution path based on those variables. You write C# code and see the results of compiling and running it. The tutorial contains a series of lessons that explore branching and looping constructs in C#. These lessons teach you the fundamentals of the C# language.

This tutorial expects you to have a machine you can use for development. The .NET tutorial Hello World in 10 minutes has instructions for setting up your local development environment on Windows, Linux, or macOS. A quick overview of the commands you'll use is in Become familiar with the development tools, with links to more details.

Make decisions using the if statement

Create a directory named branches-tutorial. Make that the current directory and run the following command:

dotnet new console -n BranchesAndLoops -o .

This command creates a new .NET Core console application in the current directory. Open Program.cs in your favorite editor, and replace the line Console.WriteLine("Hello World!"); with the following code:

int a = 5;
int b = 6;
if (a + b > 10)
    Console.WriteLine("The answer is greater than 10.");

Try this code by typing dotnet run in your console window. You should see the message "The answer is greater than 10." printed to your console.

Modify the declaration of b so that the sum is less than 10:

int b = 3;

Type dotnet run again. Because the answer is less than 10, nothing is printed. The condition you're testing is false. You don't have any code to execute because you've only written one of the possible branches for an if statement: the true branch. When the condition is false, the statement following the if is skipped.

Tip

As you explore C# (or any programming language), you'll make mistakes when you write code. The compiler will find and report the errors. Look closely at the error output and the code that generated the error. The compiler error can usually help you find the problem.
This process of checking conditions and executing statements based on those conditions is very powerful.

Make if and else work together

To execute different code in both the true and false branches, you create an else branch that executes when the condition is false. Try this. Add the last two lines in the code below to your Main method (you should already have the first four):

int a = 5;
int b = 3;
if (a + b > 10)
    Console.WriteLine("The answer is greater than 10");
else
    Console.WriteLine("The answer is not greater than 10");

The else branch lets you handle both a true and a false condition. Because whitespace and indentation are not significant in C#, you need to use { and } to indicate when you want more than one statement to be part of the block that executes conditionally. C# programmers typically use those braces on all if and else clauses. The following example is the same as the one you just created. Modify your code above to match the following code:

if (a + b > 10)
{
    Console.WriteLine("The answer is greater than 10");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
}

You can test more complicated conditions. The && operator represents "and": both conditions must be true for the true branch to execute. Add the following code in your Main method after the code you've written so far:

int c = 4;
if ((a + b + c > 10) && (a > b))
{
    Console.WriteLine("The answer is greater than 10");
    Console.WriteLine("And the first number is greater than the second");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
    Console.WriteLine("Or the first number is not greater than the second");
}

You can also use the || operator, which represents "or": at least one of the conditions must be true. Add the following code after what you've written so far:

if ((a + b + c > 10) || (a > b))
{
    Console.WriteLine("The answer is greater than 10");
    Console.WriteLine("Or the first number is greater than the second");
}
else
{
    Console.WriteLine("The answer is not greater than 10");
    Console.WriteLine("And the first number is not greater than the second");
}

You've finished the first step. Before you start the next section, let's move the current code into a separate method. That makes it easier to start working with a new example. Rename your Main method to ExploreIf and write a new Main method that calls ExploreIf.
When you have finished, your code should look like this:

using System;

namespace BranchesAndLoops
{
    class Program
    {
        static void ExploreIf()
        {
            int a = 5;
            int b = 3;
            if (a + b > 10)
            {
                Console.WriteLine("The answer is greater than 10");
            }
            else
            {
                Console.WriteLine("The answer is not greater than 10");
            }

            int c = 4;
            if ((a + b + c > 10) && (a > b))
            {
                Console.WriteLine("The answer is greater than 10");
                Console.WriteLine("And the first number is greater than the second");
            }
            else
            {
                Console.WriteLine("The answer is not greater than 10");
                Console.WriteLine("Or the first number is not greater than the second");
            }

            if ((a + b + c > 10) || (a > b))
            {
                Console.WriteLine("The answer is greater than 10");
                Console.WriteLine("Or the first number is greater than the second");
            }
            else
            {
                Console.WriteLine("The answer is not greater than 10");
                Console.WriteLine("And the first number is not greater than the second");
            }
        }

        static void Main(string[] args)
        {
            ExploreIf();
        }
    }
}

Comment out the call to ExploreIf(). It will make the output less cluttered as you work in this section:

//ExploreIf();

The // starts a comment in C#. Comments are any text you want to keep in your source code but not execute as code. The compiler does not generate any executable code from comments.

Use loops to repeat operations

In this section you use loops to repeat statements. Try this code in your Main method:

int counter = 0;
while (counter < 10)
{
    Console.WriteLine($"Hello World! The counter is {counter}");
    counter++;
}

The while statement checks a condition and executes the statement or statement block following the while. It repeatedly executes those statements as long as the condition is true. You must make sure the condition changes to false as you execute the code; otherwise, you create an infinite loop where your program never ends. (That is not demonstrated in this sample, because then you would have to force your program to quit using CTRL-C or other means.) The while loop tests the condition before executing the code following the while. The do ...
while loop executes the code first, and then checks the condition. The do while loop is shown in the following code:

int counter = 0;
do
{
    Console.WriteLine($"Hello World! The counter is {counter}");
    counter++;
} while (counter < 10);

This do loop and the earlier while loop produce the same output.

Work with the for loop

The for loop is commonly used in C#. Try this code in your Main() method:

for (int index = 0; index < 10; index++)
{
    Console.WriteLine($"Hello World! The index is {index}");
}

This does the same work as the while loop and the do loop you've already used. The for statement has three parts that control how it works. The first part is the for initializer: int index = 0; declares that index is the loop variable, and sets its initial value to 0. The middle part is the for condition: index < 10 declares that this for loop continues to execute as long as the value of index is less than 10. The final part is the for iterator: index++ specifies how to modify the loop variable after executing the block following the for statement. Here, it specifies that index should be incremented by 1 each time the block executes.

As a challenge that combines branches and loops, try to write C# code that finds the sum of all integers from 1 through 20 that are divisible by 3. You should get 63 for an answer. You can see one possible answer by viewing the completed code on GitHub.

You've completed the "branches and loops" tutorial. You can continue with the Arrays and collections tutorial in your own development environment. You can learn more about these concepts in these topics:
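For reference, one way to solve that challenge — a sketch combining the if statement and the for loop from above, not necessarily the answer published on GitHub — is:

```csharp
int sum = 0;
for (int number = 1; number <= 20; number++)
{
    // Keep only the numbers divisible by 3.
    if (number % 3 == 0)
    {
        sum += number;
    }
}
Console.WriteLine($"The sum is {sum}"); // 3 + 6 + 9 + 12 + 15 + 18 = 63
```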
https://docs.microsoft.com/en-us/dotnet/csharp/tutorials/intro-to-csharp/branches-and-loops-local
Hi All,

Fairly new to Python — got a module running a server on a designated port on the localhost. However, when I stop the server and try to run it again I get the following error:

socket.error: (98, 'Address already in use')

Unless I choose a different port, the error does not go away. My code is:

import socket
import os

ip = 'localhost'
port = 3458
mySocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
mySocket.bind((ip, port))
mySocket.listen(1)

while True:
    channel, details = mySocket.accept()
    print 'Connection established with', details
    channel.send('Connection established to server\n')
    channel.send('Logging please wait...\n')
    os.popen('netstat -an>log.txt')
    channel.send('Done.\n')
    channel.close()

Any ideas please?
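For what it's worth, errno 98 here usually means the previous server's socket is still lingering in the TIME_WAIT state. The common fix is to set the SO_REUSEADDR option before bind. A small sketch (written in Python 3, unlike the post's Python 2 code) that demonstrates closing a listener and rebinding the same port:

```python
import socket

def make_server(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow rebinding to a port that may still be in TIME_WAIT from a
    # previous run -- the usual fix for "Address already in use".
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('localhost', port))
    s.listen(1)
    return s

first = make_server(0)              # port 0 lets the OS pick a free port
port = first.getsockname()[1]
first.close()

second = make_server(port)          # rebinding the same port succeeds
print('rebound to port', port)
second.close()
```

The same setsockopt line dropped into the original script, just before mySocket.bind, should let the server restart on port 3458 without waiting for TIME_WAIT to expire.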
http://forums.devshed.com/python-programming/229608-sockets-address-error-last-post.html
Headline: Python Context Manager and the With Statement
Date: October 11th, 2010
Category: Developer

Context managers are a great way to "loan" out objects. In conjunction with Python's "with" statement, this can be quite the powerful design pattern. You will see ways to automatically clean up entire objects, or parts of objects, once they are no longer needed. Below we create an object to be used temporarily, "loan" it out, then delete it or alter the object's state in some way once we are finished with it.

Sample Code

from contextlib import contextmanager
import random

class SomeObject():
    """ Some random class. """

    def __init__(self):
        self.id = random.randint(1000, 9999)
        self.big_list = [i * 1000 for i in range(1000)]
        print 'Initialize SomeObject. ID: {0}'.format(self.id)

    def __del__(self):
        print 'Calling destructor for ID: {0}'.format(self.id)
        self.big_list = None

@contextmanager
def create_some_object():
    """ Simple method to create and yield a new object. """
    so = SomeObject()
    try:
        print 'yield SomeObject. ID: {0}'.format(so.id)
        print so.big_list  # prints really big list
        yield so
    finally:
        print so.big_list  # prints really big list
        print 'Clean up SomeObject. ID: {0}'.format(so.id)
        so.__del__()
        print so.big_list  # prints None since the destructor was
                           # called and the memory was cleaned up

# Now let's do something with the objects.
with create_some_object() as obj:
    # do something
    print 'Do something with SomeObject ID: {0}'.format(obj.id)

Output

Initialize SomeObject. ID: 9750
yield SomeObject. ID: 9750
{PRINTS BIG_LIST. Omitted for the purpose of saving space.}
Do something with SomeObject ID: 9750
{PRINTS BIG_LIST. Omitted for the purpose of saving space.}
Clean up SomeObject. ID: 9750
Calling destructor for ID: 9750
None
Calling destructor for ID: 9750  # Also called when the program exits

You'll see that for demo purposes, I create SomeObject through the create_some_object() method.
When the object is initialized, I randomly generate a big_list that I only want to keep around for a short time. Then, once I'm finished with the big_list portion of the object, I set it to None to free up the memory.

So how can you benefit from this design pattern? One practical use for context managers is for testing purposes. When you run your tests, there will often be times when objects are created and you want your tests to clean up after themselves. With the method above, you can wrap your objects with the try/finally logic and delete your object in the finally block once the test is finished with it. That way, when you run your tests, your random test objects will be deleted from your database.

Sample Flow

@contextmanager
def create_person():
    p = Person()
    p.id = 1
    try:
        yield p
    finally:
        p.delete()

with create_person() as p:
    self.assertEquals(1, p.id)

Assuming you've implemented a delete() method for your Person() object, the object will get deleted from your database when it's no longer being used. This pattern will ensure cleanup of your objects created during testing.

Note: when working with context managers, be aware of memory usage. This would not be a good design pattern for load testing, where you want to create a lot of objects and have them all cleaned up. Your script will slow down immensely the more objects are created, because they are all kept in memory and "loaned" out until they are no longer needed (you will likely need a very large number of items before this starts to slow down).

There are many different ways the functionality above can assist you. This is really just a simple introduction to what you can accomplish with Python's context managers and "with" statement.
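Pieced together as a runnable sketch of the sample flow above — with a stand-in Person class, since the post assumes one already exists — the cleanup guarantee looks like this:

```python
from contextlib import contextmanager

class Person:
    """Stand-in for the post's Person object."""
    def __init__(self):
        self.id = None
        self.deleted = False

    def delete(self):
        # In a real test this would remove the row from the database.
        self.deleted = True

@contextmanager
def create_person():
    p = Person()
    p.id = 1
    try:
        yield p
    finally:
        p.delete()  # cleanup runs even if the block raises

with create_person() as p:
    assert p.id == 1

print('deleted after the with block:', p.deleted)  # True
```

Because delete() sits in the finally clause, the cleanup happens whether the with block finishes normally or raises.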
http://blog.troygrosfield.com/2010/10/11/python-context-manager-and-the-with-statement/
Getting The Most Out of XML and Exploring Anko Layouts for Android

For this specific aspect of an application, our team chose to give Anko Layouts a try. Theoretically, domain-specific languages should be understandable enough that non-technical people can read and understand them. This was a core reason why we started looking into Anko Layouts. On the inconvenience side, the Anko Layouts wiki states that XML is:

- Not typesafe
- Not null-safe
- Forcing you to write almost the same code for every layout you make
- Parsed on the device, wasting CPU time and battery
- Not allowing code reuse

Anko Layouts were actually part of our production code for a short period of time last year. Due to some problems with blank screens on older operating systems, we decided to revert back to XML in that case. It was now time to revisit history.

About the Wayfair App

You might already be familiar with Wayfair's Android app – if not, our previous look into Kotlin and JaCoCo gives a good overview. To paraphrase, we have weekly releases to support what might be characterized as your usual e-commerce app, with our architecture a hybrid of VIPER and Clean. This combination allows for a separation of responsibilities between "layers" and increases the code coverage of our unit tests. For the purposes of our Anko Layouts investigation, we're using a simplified version of our production code. The actual code is provided at the bottom of the article. Let's get started.

Diving into Anko Layouts for a Second Time

As I mentioned, we had previously dabbled with Anko Layouts but returned to XML in that instance. Here, we decided to give Anko Layouts a second try: we built two simple apps, one using Anko Layouts, the other using XML. We used this approach in order to create realistic benchmarks for future consideration and use.

The Layout is Code

With Anko, you create UI programmatically by using DSLs, which should bring about all its noted advantages.
However, If you aren’t careful, nothing prevents you from adding other responsibilities into this code. That would be against the single responsibility principle. In contrast, XML prevents you from adding any kind of code – this is great! The layout should only define a structure for a user interface in your app without any additional logic. Out of the box, there is a clear separation of concerns. Time for a warning: When using Anko, your code could crash if you need to support older versions of Android’s operating system. In order to define margins, you would have to do the following: In this example, your IDE would complain if you don’t define correct margins for different OS versions, but with this, your code could get ugly: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) { marginStart = dip(16) } else { leftMargin = dip(16) } In the examples your IDE wouldn’t complain, but the app will crash with an older OS – if you aren’t careful enough you could run into production crashes with older Android versions: if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) { foreground = ctx.getDrawableFromAndroidAttr(android.R.attr.selectableItemBackground) } …… relativeLayout() { val txv = textView() { id = R.id.profile_text_viewholder textSize = 6f } if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1) { txv.lparams(width = wrapContent, height = wrapContent) { alignParentStart() } } else { txv.lparams(width = wrapContent, height = wrapContent) { alignParentLeft() } } } Code ugliness is something we can control by using extension functions, but IDEs are not yet smart enough to prevent us from causing errors like this. Preview in Design Time A very important aspect of UI development time is a preview. Developers should be able to access a preview, in a speedy way, of all the changes they’ve made without building the whole project over again. 
We achieve this with the XML Preview Tool from Android Studio, but unfortunately, this functionality isn't available for Anko Layouts. There is an Anko Layouts plugin available, but we found it very unstable and incredibly difficult to use. At the time of writing we were using Android Studio 3.2.1, and the Anko Layouts plugin was broken. Also, we couldn't find a way to use the tools namespace for Anko Layouts, so the preview option isn't so great at design time; it can be frustrating when you have to run your code every time you want to see the UI changes you've implemented.

Impact on CPU Time and Battery

The wiki page for Anko Layouts claims that "XML is parsed on the device, wasting CPU time and battery". Using Anko sounded like a great improvement for delivering the best user experience in our app by saving the user's battery. We took all of our measurements on the Android Emulator Pixel 2 API 26 and compared CPU usage between the two layout versions with the Android Profiler. Our finding was that there was no big difference in CPU usage, though performance was slightly better with XML Layouts.

XML Layouts:

Anko Layouts:

We also measured the time needed for the onCreateViewHolder method to execute inside the RecyclerView adapter.
Anko:

override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ListViewHolder {
    val timings = TimingLogger(HomeRecyclerAdapter::class.java.simpleName, "onCreateViewHolder")
    val itemView = AnkoViewHolderListUI().createView(AnkoContext.create(parent.context, parent))
    timings.addSplit("onCreateViewHolder done")
    timings.dumpToLog()
    return ListViewHolder(itemView)
}

XML:

override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ListViewHolder {
    val timings = TimingLogger(HomeRecyclerAdapter::class.java.simpleName, "onCreateViewHolder")
    val itemView = LayoutInflater.from(parent.context).inflate(R.layout.image_text_view_holder, parent, false)
    timings.addSplit("onCreateViewHolder done")
    timings.dumpToLog()
    return ListViewHolder(itemView)
}

All of the measurements we took were very similar, as noted below. This data seems to point in favor of Anko as being faster: Anko beats XML in fifteen cases, and XML beats Anko in four. When Anko does take longer, however, it does so for a specific reason that we have yet to investigate. The graphical representation of this data is below.

At first glance the results look as though XML Layouts have much better performance, more than twice over, in fact. But if we ignore the best and the worst results, the numbers are in favor of Anko. We are not exactly sure why Anko sometimes takes longer, but it looks like Anko Layouts are slightly faster than XML. We hope to investigate this further in the future.

Build Time

As you already know, a slower build time means using more of a developer's hours. Build time is important to us, and we're always trying to save time in the build process. As part of our investigation, our team also compared build times for these two implementations. Lo and behold, the Anko version turns out to be slightly slower. We looked at clean builds versus incremental builds in order to compare and take note.
Clean builds:

Incremental builds:

Our Conclusion

Here at Wayfair, we are very curious about how DSLs will be used in the coming years. We are excited that with Kotlin you can create your own DSL, allowing faster development and more readable code across teams, whether team members are technical or not. With Anko Layouts, we can gain a variety of improvements: view rendering time is shorter, plus less CPU and battery usage. However, we're also concerned about Anko Layouts due to the following:

- The separation of concerns for your layout is not clear
- There are problems with UI previews at development time
- It looks like build time would really go up for bigger projects
- Finally, there are issues with older OS versions

The above considerations are why we wouldn't replace XML with Anko Layouts at this moment in time for production projects. It was great to explore this possibility; perhaps using Anko Layouts in places where you have to build UI programmatically anyway could work – we'll keep the blog updated as we continue to experiment. We can't wait to see how this will evolve in the future!

For a look at the projects we used, see the links below – thanks for reading!

Userlist XML
Userlist Anko

Responses

June 16th, 2019

It's worth mentioning that the Anko GitHub repo seems to have been dead for quite some time. There are still no artifacts that are compatible with AndroidX, and since the announcement of Jetpack Compose, I don't think it's reasonable to invest in Anko.

June 17th, 2019

To be honest, we did the Anko Layouts investigation some time ago and felt now was the appropriate time to share our results. Also, now with Declarative UI Patterns being announced, it is even more interesting to share our findings. We are looking forward to investigating more about Jetpack Compose.
https://tech.wayfair.com/2019/06/getting-the-most-out-of-xml-and-exploring-anko-layouts-for-android/
Detailed Description

This namespace groups all classes related to the Client module.

The main entry point into the KWayland::Client API is the ConnectionThread class. It allows you to create a Wayland client connection, either in a native way or by wrapping a connection created by the QtWayland QPA plugin.

KWayland::Client provides, on the one hand, a low-level API to interact with the Wayland API and, on the other hand, an easy-to-use convenience API. Each class directly relates to a low-level Wayland type and allows direct casting into that type. On the convenience side, KWayland::Client allows easy creation of objects, emits signals for Wayland events, and provides easy conversion from Qt to Wayland types.

Once one has a ConnectionThread created, it's possible to set up a Registry to get a listing of all registered globals. For each global, the Registry provides a convenience method to create the resource.

- See also
- ConnectionThread
- Registry

Documentation copyright © 1996-2019 The KDE developers. Generated on Sun Oct 13 2019 03:23:07 by doxygen 1.8.11, written by Dimitri van Heesch, © 1997-2006. KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kwayland/html/namespaceKWayland_1_1Client.html
The common Swift data types, as used in the examples below, are String (a sequence of characters), Int (a whole number), Float (a 32-bit floating-point number), Double (a 64-bit floating-point number), and Bool (true or false).

The following code shows how to use these data types:

let s:String = "Hey There"
let i:Int = -5
let f:Float = 3.14
let d:Double = 9.9
let b:Bool = true

The Swift compiler can figure the data type out based on what value you assign to the variable or constant. This is called type inference. This means you could have omitted the :String from your declaration and instead used something as follows:

var s3 = "java2s.com"

The following code uses the println function to output a string to the console:

import Foundation

println("Hello, World!")

The following code shows how to print variables:

import Foundation

let s:String = "Hey There"
println(s)

println() prints something out and then goes to the next line; print() will print out a value without moving to the next line in the console window.

import Foundation

print("Print this")
print(" and ")
print("that")

We can add an alternative name for existing types using type aliases. The following code uses a type alias to define Note as an alternative to String:

typealias Note = String

var n1:Note = "hi"

You can treat n1 as a string.
http://www.java2s.com/Tutorials/Swift/Swift_Language/0020__Swift_Data_Types.htm
CC-MAIN-2018-13
refinedweb
196
76.11
A neural network library built on top of TensorFlow for quickly building deep learning models.

Project description

A neural network library built on top of TensorFlow for quickly building deep learning models.

Usage

nn.Tensor is the core data structure, which is a wrapper for tf.Tensor and provides additional functionality. It can be created using the nn.tensor() function:

import nn

a = nn.tensor([1, 2, 3])
assert isinstance(a, nn.Tensor)
assert a.shape == (3, )

It supports method chaining:

c = a.square().sum()
assert c.numpy() == 14

and can be used with tf.Tensor objects:

import tensorflow as tf

b = tf.constant(2)
c = (a - b).square().sum()
assert c.numpy() == 2

It can also be used with high level APIs such as tf.keras:

model = nn.Sequential([
    nn.Dense(128, activation='relu'),
    nn.Dropout(0.2),
    nn.Dense(10)
])
y = model(x)
assert isinstance(y, nn.Tensor)

and to perform automatic differentiation and optimization:

optimizer = nn.Adam()
with nn.GradientTape() as tape:
    outputs = model(inputs)
    loss = (targets - outputs).square().mean()
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))

To use it with ops that expect tf.Tensor objects as inputs, wrap the ops using nn.op():

mean = nn.op(tf.reduce_mean)
c = mean(a)
assert isinstance(c, nn.Tensor)

maximum = nn.op(tf.maximum, binary=True)
c = maximum(a, b)
assert isinstance(c, nn.Tensor)

or convert it to a tf.Tensor object using the tf() method or nn.tf() function:

b = a.tf()
assert isinstance(b, tf.Tensor)

b = nn.tf(a)
assert isinstance(b, tf.Tensor)

Installation

Requirements:
- TensorFlow >= 2.0
- Python >= 3.6

Install from PyPI (recommended):

pip install nn

Alternatively, install from source:

git clone
cd nn
pip install -e .

TensorFlow should be installed separately.
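The method-chaining style shown above does not depend on TensorFlow itself. As a rough illustration of the idea (this is a made-up stand-in, not the real nn implementation), a minimal chainable wrapper might look like this:

```python
# Illustrative sketch only: a tiny stand-in for nn.Tensor that shows how
# square()/sum() method chaining can work, backed by plain Python lists.
class MiniTensor:
    def __init__(self, data):
        self.data = list(data)

    def square(self):
        # Return a new MiniTensor so further calls can be chained.
        return MiniTensor(x * x for x in self.data)

    def sum(self):
        # Terminal operation: collapse to a plain number.
        return sum(self.data)

a = MiniTensor([1, 2, 3])
print(a.square().sum())  # 1 + 4 + 9 = 14
```

Each non-terminal method returns a fresh wrapper object, which is what makes expressions like `a.square().sum()` compose naturally.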
Testing

To run tests, install dependencies:

pip install -e .[tests]

and run:

pytest tests
https://pypi.org/project/nn/
CC-MAIN-2021-39
refinedweb
339
56.32
#949 – Add a Custom Dictionary for Spell Checking in a TextBox November 13, 2013 The built-in spell checker in a TextBox uses a predefined dictionary to look up words. This means that some words may be flagged as misspelled because they are not in the dictionary. You can add a custom dictionary to WPF's spell checker by creating a lexicon (.lex) file. To create a custom dictionary, start by creating a .lex file in your project. Add any custom words to your dictionary, one per line. Set the Build Action of the .lex file to Resource. In the .xaml file containing the TextBox element, define a namespace that refers to the system assembly.

xmlns:sys="clr-namespace:System;assembly=system"

And set the SpellCheck.CustomDictionaries element using a Uri to refer to your dictionary.

<TextBox Name="txtMyText" Margin="5" Height="100"
         TextWrapping="Wrap" VerticalScrollBarVisibility="Auto"
         SpellCheck.IsEnabled="True">
    <SpellCheck.CustomDictionaries>
        <sys:Uri>pack://application:,,,/MyDictionary.lex</sys:Uri>
    </SpellCheck.CustomDictionaries>
</TextBox>

The spell checker now recognizes your custom words and includes them in its suggestions.
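The custom dictionary can also be attached from code-behind instead of XAML. The following C# fragment is a sketch (the control name and pack URI are assumed to match the example above, and it only runs inside a WPF project):

```csharp
// Code-behind alternative: attach MyDictionary.lex to txtMyText at runtime.
// Assumes the same TextBox and .lex resource as the XAML example above.
using System;
using System.Collections.Generic;
using System.Windows.Controls;

public partial class MainWindow
{
    private void AttachCustomDictionary()
    {
        IList<Uri> dictionaries = SpellCheck.GetCustomDictionaries(txtMyText);
        dictionaries.Add(new Uri("pack://application:,,,/MyDictionary.lex"));
    }
}
```

This does the same thing as the SpellCheck.CustomDictionaries element in XAML, which can be convenient when the dictionary choice depends on runtime state.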
https://wpf.2000things.com/2013/11/13/949-add-a-custom-dictionary-for-spell-checking-in-a-textbox/
CC-MAIN-2018-43
refinedweb
199
58.69
- Design goals
- Integration
- Examples
- Supported compilers
- License
- Thanks
- Used third-party tools
- Projects using JSON for Modern C++
- Notes
- Execute unit tests

Design goals

- Intuitive syntax. In languages such as Python, JSON feels like a first-class data type. We used all the operator magic of modern C++ to achieve the same feeling in your code. Check out the examples below and you'll know what I mean.
- Trivial integration. Our whole code consists of a single header file json.hpp. That's it. No library, no subproject, no dependencies, no complex build system. The class is written in vanilla C++11. All in all, everything should require no adjustment of your compiler flags or project settings.
- Serious testing. Our class is heavily unit-tested and covers 100% of the code, including all exceptional behavior. Furthermore, we checked with Valgrind and the Clang Sanitizers that there are no memory leaks. Google OSS-Fuzz additionally runs fuzz tests against all parsers 24/7, effectively executing billions of tests so far.

Integration

json.hpp is the single required file in single_include/nlohmann or released here. You need to add

#include <nlohmann/json.hpp>

// for convenience
using json = nlohmann::json;

to the files you want to process JSON.

Package Managers

If you are using the Meson Build System, then you can wrap this repository as a subproject. If you are using Conan to manage your dependencies, merely add jsonformoderncpp/[email protected]/stable to your conanfile.py's requires, where x.y.z is the release version you want to use. Please file issues here if you experience problems with the packages. If you are using Spack to manage your dependencies, you can use the nlohmann_json package. Please see the spack project for any issues regarding the packaging. If you are using hunter on your project for external dependencies, then you can use the nloh/[email protected]. Also, the multiple-header version can be installed by adding the -DJSON_MultipleHeaders=ON flag (i.e., cget install nlohmann/json -DJSON_MultipleHeaders=ON). If you are using CocoaPods, you can use the library by adding pod "nlohmann_json", '~>3.1.2' to your podfile (see an example). Please file issues here.
Here are some examples to give you an idea how to use the class. Assume you want to create the JSON object

{
  "pi": 3.141,
  "happy": true,
  "name": "Niels",
  "nothing": null,
  "answer": {
    "everything": 42
  },
  "list": [1, 0, 2],
  "object": {
    "currency": "USD",
    "value": 42.99
  }
}

With this library, you could write:

json j;
j["pi"] = 3.141;
j["happy"] = true;
j["name"] = "Niels";
j["nothing"] = nullptr;
j["answer"]["everything"] = 42;
j["list"] = { 1, 0, 2 };
j["object"] = { {"currency", "USD"}, {"value", 42.99} };

Serialization / Deserialization

To/from strings

You can create a JSON value from a string (deserialize):

auto j3 = json::parse(R"({ "happy": true, "pi": 3.141 })");

You can also get a string representation of a JSON value (serialize):

// explicit conversion to string
std::string s = j.dump(); // {"happy":true,"pi":3.141}

// serialization with pretty printing
// pass in the amount of spaces to indent
std::cout << j.dump(4) << std::endl;
// {
//     "happy": true,
//     "pi": 3.141
// }

Note the library only supports UTF-8. When you store strings with different encodings in the library, calling dump() may throw an exception.

To/from streams (e.g. files, string streams)

You can also serialize and deserialize JSON values using streams with the parse() function and the >> and << operators.

SAX interface

The library uses a SAX-like interface with the following functions:

// called when null is parsed
bool null();

// called when a boolean is parsed; value is passed
bool boolean(bool val);

// called when a signed or unsigned integer number is parsed; value is passed
bool number_integer(number_integer_t val);
bool number_unsigned(number_unsigned_t val);

// called when a floating-point number is parsed; value and original string is passed
bool number_float(number_float_t val, const string_t& s);

// called when a string is parsed; value is passed and can be safely moved away
bool string(string_t& val);

// called when an object or array begins or ends, resp.
The number of elements is passed (or -1 if not known)
bool start_object(std::size_t elements);
bool end_object();
bool start_array(std::size_t elements);
bool end_array();

// called when an object key is parsed; value is passed and can be safely moved away
bool key(string_t& val);

// called when a parse error occurs; byte position, the last token, and an exception is passed
bool parse_error(std::size_t position, const std::string& last_token, const detail::exception& ex);

The return value of each function determines whether parsing should proceed. To implement your own SAX handler, proceed as follows:

- Implement the SAX interface in a class. You can use class nlohmann::json_sax<json> as base class, but you can also use any class where the functions described above are implemented and public.
- Create an object of your SAX interface class, e.g. my_sax.
- Call bool json::sax_parse(input, &my_sax); where the first parameter can be any input like a string or an input stream and the second parameter is a pointer to your SAX interface.

Note the sax_parse function only returns a bool indicating the result of the last executed SAX event. It does not return a json value – it is up to you to decide what to do with the SAX events. Furthermore, no exceptions are thrown in case of a parse error – it is up to you what to do with the exception object passed to your parse_error implementation. Internally, the SAX interface is used for the DOM parser (class json_sax_dom_parser) as well as the acceptor (json_sax_acceptor); see file json_sax.h.

Conversion from STL containers

Any sequence container (std::array, std::vector, std::deque, std::forward_list, std::list) whose values can be used to construct JSON values can be used to create a JSON array. Likewise, any associative key-value container whose keys can construct an std::string and whose values can be used to construct JSON values (see examples above) can be used to create a JSON object.

JSON Pointer and JSON Patch

The library supports JSON Pointer (RFC 6901) as an alternative means to address structured values, and JSON Patch (RFC 6902) to describe differences between two JSON values.

JSON Merge Patch

The library supports JSON Merge Patch (RFC 7386) as a patch format.

Implicit conversions

The type of the JSON object is determined automatically by the expression to store; likewise, the stored value is implicitly converted when needed.
Note that char types are not automatically converted to JSON strings, but to integer numbers. A conversion to a string must be specified explicitly:

char ch = 'A'; // ASCII value 65
json j_default = ch; // stores integer number 65
json j_string = std::string(1, ch); // stores string "A"

There are some important things to note when converting your own types:

- Those methods MUST be available (e.g., proper headers must be included) everywhere you use the implicit conversions. Look at issue 1108 for errors that may occur otherwise.
- When using get<your_type>(), your_type MUST be DefaultConstructible. (There is a way to bypass this requirement described later.)
- In function from_json, use function at() to access the object values rather than operator[]. In case a key does not exist, at throws an exception that you can handle.
- Be careful with the definition order of the from_json/to_json functions: If a type B has a member of type A, you MUST define to_json(A) before to_json(B). Look at issue 561 for more details.

Binary formats (CBOR, MessagePack, and UBJSON)

Though JSON is a ubiquitous data format, it is not a very compact format suitable for data exchange, for instance over a network. Hence, the library supports CBOR (Concise Binary Object Representation), MessagePack, and UBJSON (Universal Binary JSON Specification).

Supported compilers

Though it's 2018 already, the support for C++11 is still a bit sparse. Currently, the following compilers are known to work:

- GCC 4.9 – 8.2 (and possibly later)
- Clang 3.4 – 6.1 (and possibly later)
- Intel C++ Compiler 17.0.2 (and possibly later)
- Microsoft Visual C++ 2015 / Build Tools 14.0.25123.0 (and possibly later)
- Microsoft Visual C++ 2017 / Build Tools 15.5.180.51428 (and possibly later)

Unsupported versions of GCC and Clang are rejected by #error directives. This can be switched off by defining JSON_SKIP_UNSUPPORTED_COMPILER_CHECK. Note that you can expect no support in this case.
The following compilers are currently used in continuous integration at Travis and AppVeyor:

License

The class is licensed under the MIT License. The class contains the UTF-8 Decoder from Bjoern Hoehrmann, which is also licensed under the MIT License (see above). Copyright © 2008-2009 Björn Hoehrmann [email protected].

Security

Commits by Niels Lohmann and releases are signed with this PGP Key.

Thanks

- Alex noted an error in a code sample.
- Tom de Geus reported some warnings with ICC and helped fixing them.
- Perry Kundert simplified reading from input streams.
- Sonu Lohani fixed a small compilation error.
- Jamie Seward fixed all MSVC warnings.
- Nate Vargas added a Doxygen tag file.
- pvleuven helped fixing a warning in ICC.
- Pavel helped fixing some warnings in MSVC.
- Jamie Seward avoided unnecessary string copies in find() and count().
- Mitja fixed some typos.
- Jorrit Wronski updated the Hunter package links.
- Matthias Möller added a .natvis for the MSVC debug view.
- bogemic fixed some C++17 deprecation warnings.
- Eren Okka fixed some MSVC warnings.
- abolz integrated the Grisu2 algorithm for proper floating-point formatting, allowing more roundtrip checks to succeed.
- Vadim Evard fixed a Markdown issue in the README.
- zerodefect fixed a compiler warning.
- Kert allowed to template the string type in the serialization and added the possibility to override the exceptional behavior.
- mark-99 helped fixing an ICC error.
- Patrik Huber fixed links in the README file.
- johnfb found a bug in the implementation of CBOR's indefinite length strings.
- Paul Fultz II added a note on the cget package manager.
- Wilson Lin made the integration section of the README more concise.
- RalfBielig detected and fixed a memory leak in the parser callback.
- agrianius allowed to dump JSON to an alternative string type.
- Kevin Tonon overworked the C++11 compiler checks in CMake.
- Axel Huebl simplified a CMake check and added support for the Spack package manager. - Carlos O’Ryan fixed a typo. - James Upjohn fixed a version number in the compilers section. - Chuck Atkins adjusted the CMake files to the CMake packaging guidelines - Jan Schöppach fixed a typo. - martin-mfg fixed a typo. - Matthias Möller removed the dependency from std::stringstream. - agrianius added code to use alternative string implementations. - Daniel599 allowed to use more algorithms with the items()function. - Julius Rakow fixed the Meson include directory and fixed the links to cppreference.com. - Sonu Lohani fixed the compilation with MSVC 2015 in debug mode. - grembo fixed the test suite and re-enabled several test cases. - Hyeon Kim introduced the macro JSON_INTERNAL_CATCHto control the exception handling inside the library. - thyu fixed a compiler warning. Thanks a lot for helping out! Please let me know if I forgot someone. Used third-party tools The library itself consists of a single header file licensed under the MIT license. However, it is built, tested, documented, and whatnot using a lot of third-party tools and services. Thanks a lot! 
- amalgamate.py – Amalgamate C source and header files to create a single header file
- American fuzzy lop for fuzz testing
- AppVeyor for continuous integration on Windows
- Artistic Style for automatic source code indentation
- Catch for the unit tests
- Clang for compilation with code sanitizers
- Cmake for build automation
- Codacity for further code analysis
- Coveralls to measure code coverage
- Coverity Scan for static analysis
- cppcheck for static analysis
- Doxygen to generate documentation
- git-update-ghpages to upload the documentation to gh-pages
- GitHub Changelog Generator to generate the ChangeLog
- Google Benchmark to implement the benchmarks
- libFuzzer to implement fuzz testing for OSS-Fuzz
- OSS-Fuzz for continuous fuzz testing of the library (project repository)
- Probot for automating maintainer tasks such as closing stale issues, requesting missing information, or detecting toxic comments.

Execute unit tests

To compile and run the tests, you need to execute

$ mkdir build
$ cd build
$ cmake ..
$ cmake --build .
$ ctest --output-on-failure

For more information, have a look at the file .travis.yml.

Latest podspec

{
  "name": "nlohmann_json",
  "version": "3.2.0",
  "summary": "JSON for Modern C++",
  "description": "JSON library for modern c++, it can read & write JSON. If you know how to use std::vector or std::map, you are already set.",
  "homepage": "",
  "screenshots": "",
  "license": {
    "type": "MIT",
    "text": "MIT License\n\nCopyright (c) 2013-2018 Niels Lohmann"
  },
  "authors": {
    "Niels Lohmann": "[email protected]"
  },
  "platforms": {
    "ios": "8.0"
  },
  "source": {
    "git": "",
    "tag": "v3.2.0"
  },
  "source_files": "single_include/**/*.hpp",
  "public_header_files": "single_include/nlohmann/json.hpp"
}

Sat, 25 Aug 2018 05:00:05 +0000
https://tryexcept.com/articles/cocoapod/nlohmann_json
CC-MAIN-2018-43
refinedweb
1,918
57.47
The SCC is single-threaded, and so is this module. When the scc_crypt() routine is called, it will lock out other accesses to the function. If another task is already in the module, the subsequent caller will spin on a lock, waiting for the other access to finish. Note that long crypto operations could cause a task to spin for a while, preventing other kernel work (other than interrupt processing) from getting done. The external (kernel module) interface is through the following functions: Definition in file scc2_driver.c.

#include "sahara2/include/portable_os.h"
#include "scc2_internals.h"
#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/dmapool.h>

Go to the source code of this file.
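The spin-until-free behavior described above can be sketched in ordinary user-space C. This is an illustrative stand-in, not the real kernel driver: scc_crypt_sim and its XOR "cipher" are invented for the example, and a real driver would use the kernel's spinlock primitives.

```c
/* Illustrative sketch of the single-threaded locking pattern described
 * above: one lock guards the whole module, and a second caller spins
 * until the first access finishes. */
static volatile int scc_busy = 0;

static int scc_try_enter(void) {
    if (scc_busy)
        return 0;      /* another task is already inside the module */
    scc_busy = 1;
    return 1;
}

static void scc_leave(void) {
    scc_busy = 0;
}

/* Hypothetical stand-in for scc_crypt(): a toy XOR "cipher" performed
 * while holding the lock. */
int scc_crypt_sim(int data) {
    while (!scc_try_enter())
        ;              /* subsequent callers spin, as the docs describe */
    int out = data ^ 0x5A;
    scc_leave();
    return out;
}
```

In a single-threaded run the loop never spins; under concurrency, the spin is exactly the waiting behavior the documentation warns can stall other (non-interrupt) kernel work during long crypto operations.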
http://linux-fsl-imx51.sourcearchive.com/documentation/2.6.31-605.9/scc2__driver_8c.html
CC-MAIN-2017-30
refinedweb
124
53.88
July 2018 Volume 33 Number 7 [Machine Learning] Machine Learning with IoT Devices on the Edge Imagine that, in the not too distant future, you're the designer of a smart traffic intersection. This is an example of ML on an IoT device on the edge. I use the term edge device to mean anything connected to the cloud, where cloud refers to something like Microsoft Azure or a company's remote servers. In this article, I'll explain two ways you can design ML on the edge. Specifically, I'll describe how to write a custom model and IO function for a device, and how to use the Microsoft Embedded Learning Library (ELL) set of tools to deploy an optimized ML model to a device on the edge. The custom IO approach is currently, as I write this article, the most common way to deploy an ML model to an IoT device. The ELL approach is forward-looking. Even if you're not working with ML on IoT devices, there are at least three reasons why you might want to read this article. First, the design principles involved generalize to other software development scenarios. Second, it's quite possible that you'll be working with ML and IoT devices relatively soon. Third, you may just find the techniques described here interesting in their own right. Why does ML need to be on the IoT edge? Why not just do all processing in the cloud? IoT devices on the edge can be very inexpensive, but they often have limited memory, limited processing capability and a limited power supply. In many scenarios, trying to perform ML processing in the cloud has several drawbacks. Latency is often a big problem. In the smart traffic intersection example, a delay of more than a fraction of a second could have disastrous consequences.
Additional problems with trying to perform ML in the cloud include reliability (a dropped network connection is typically impossible to predict and difficult to deal with), network availability (for example, a ship at sea may have connectivity only when a satellite is overhead) and privacy/security (when, for example, you're monitoring a patient in a hospital). This article doesn't assume you have any particular background or skill set but does assume you have some general software development experience. The demo programs described in this article (a Python program that uses the CNTK library to create an ML model, a C program that simulates IoT code and a Python program that uses an ELL model) are too long to present here, but they're available in the accompanying file download. What Is a Machine Learning Model? In order to understand the issues with deploying an ML model to an IoT device on the edge, you must understand exactly what an ML model is. Very loosely speaking, an ML model is all the information needed to accept input data, make a prediction and generate output data. Rather than try to explain in the abstract, I'll illustrate the ideas using a concrete example. Take a look at the screenshot in Figure 1 and the diagram in Figure 2. The two figures show a neural network with four input nodes, five hidden layer processing nodes and three output layer nodes. The input values are (6.1, 3.1, 5.1, 1.1) and the output values are (0.0321, 0.6458, 0.3221). Figure 1 shows how the model was developed and trained. I used Visual Studio Code, but there are many alternatives. Figure 1 Creating and Training a Neural Network Model Figure 2 The Neural Network Input-Output Mechanism This particular example involves predicting the species of an iris flower using input values that represent sepal (a leaf-like structure) length and width and petal length and width. There are three possible species of flower: setosa, versicolor, virginica.
The output values can be interpreted as probabilities (note that they sum to 1.0) so, because the second value, 0.6458, is largest, the model's prediction is the second species, versicolor. In Figure 2, each line connecting a pair of nodes represents a weight. A weight is just a numeric constant. If nodes are zero-based indexed, from top to bottom, the weight from input[0] to hidden[0] is 0.2680 and the weight from hidden[4] to output[0] is 0.9381. Each hidden and output node has a small arrow pointing into the node. These are called biases. The bias for hidden[0] is 0.1164 and the bias for output[0] is -0.0466. You can think of a neural network as a complicated math function because it just accepts numeric input and produces numeric output. An ML model on an IoT device needs to know how to compute output. For the neural network in Figure 2, the first step is to compute the values of the hidden nodes. The value of each hidden node is the hyperbolic tangent (tanh) function applied to the sum of the products of inputs and associated weights, plus the bias. For hidden[0] the calculation is:

hidden[0] = tanh((6.1 * 0.2680) + (3.1 * 0.3954) + (5.1 * -0.5503) + (1.1 * -0.3220) + 0.1164)
          = tanh(-0.1838)
          = -0.1817

Hidden nodes [1] through [4] are calculated similarly. The tanh function is called the hidden layer activation function. There are other activation functions that can be used, such as logistic sigmoid and rectified linear unit, which would give different hidden node values. After the hidden node values have been computed, the next step is to compute preliminary output node values. A preliminary output node value is just the sum of products of hidden nodes and associated hidden-to-output weights, plus the bias. In other words, the same calculation as used for hidden nodes, but without the activation function.
For the preliminary value of output[0] the calculation is:

o_pre[0] = (-0.1817 * 0.7552) + (-0.0824 * -0.7297) + (-0.1190 * -0.6733) + (-0.9287 * 0.9367) + (-0.9081 * 0.9381) + (-0.0466)
         = -1.7654

The values for output nodes [1] and [2] are calculated in the same way. After the preliminary values of the output nodes have been computed, the final output node values can be converted to probabilities using the softmax activation function. The softmax function is best explained by example. The calculations for the final output values are:

sum = exp(o_pre[0]) + exp(o_pre[1]) + exp(o_pre[2])
    = 0.1711 + 3.4391 + 1.7153
    = 5.3255

output[0] = exp(o_pre[0]) / sum = 0.1711 / 5.3255 = 0.0321
output[1] = exp(o_pre[1]) / sum = 3.4391 / 5.3255 = 0.6458
output[2] = exp(o_pre[2]) / sum = 1.7153 / 5.3255 = 0.3221

As with the hidden nodes, there are alternative output node activation functions, such as the identity function. To summarize, an ML model is all the information needed to accept input data and generate an output prediction. In the case of a neural network, this information consists of the number of input, hidden and output nodes, the values of the weights and biases, and the types of activation functions used on the hidden and output layer nodes. OK, but where do the values of the weights and the biases come from? They're determined by training the model. Training is using a set of data that has known input values and known, correct output values, and applying an optimization algorithm such as back-propagation to minimize the difference between computed output values and known, correct output values. There are many other kinds of ML models, such as decision trees and naive Bayes, but the general principles are the same. When using a neural network code library such as Microsoft CNTK or Google Keras/TensorFlow, the program that trains an ML model will save the model to disk.
For example, CNTK and Keras code resembles:

mp = ".\\Models\\iris_nn.model"
model.save(mp, format=C.ModelFormat.CNTKv2) # CNTK

model.save(".\\Models\\iris_model.h5") # Keras

ML libraries also have functions to load a saved model. For example:

mp = ".\\Models\\iris_nn.model"
model = C.ops.functions.Function.load(mp) # CNTK

model = load_model(".\\Models\\iris_model.h5") # Keras

Most neural network libraries have a way to save just a model's weights and biases values to file (as opposed to the entire model). Deploying a Standard ML Model to an IoT Device The image in Figure 1 shows an example of what training an ML model looks like. I used Visual Studio Code as the editor and the Python language API interface to the CNTK v2.4 library. Creating a trained ML model can take days or weeks of effort, and typically requires a lot of processing power and memory. Therefore, model training is usually performed on powerful machines, often with one or more GPUs. Additionally, as the size and complexity of a neural network increases, the number of weights and biases increases dramatically, and so the file size of a saved model also increases greatly. For example, the 4-5-3 iris model described in the previous section has only (4 * 5) + 5 + (5 * 3) + 3 = 43 weights and biases. But an image classification model with millions of input pixel values and hundreds of hidden processing nodes can have hundreds of millions, or even billions, of weights and biases. Notice that the values of all 43 weights and biases of the iris example are shown in Figure 1:

[[ 0.2680 -0.3782 -0.3828  0.1143  0.1269]
 [ 0.3954 -0.4367 -0.4332  0.3880  0.3814]
 [-0.5503  0.6453  0.6394 -0.6454 -0.6300]
 [-0.322   0.4035  0.4163 -0.3074 -0.3112]]
[ 0.1164 -0.1567 -0.1604  0.0810  0.0822]
[[ 0.7552 -0.0001 -0.7706]
 [-0.7297 -0.2048  0.9301]
 [-0.6733 -0.2512  0.9167]
 [ 0.9367 -0.4276 -0.5134]
 [ 0.9381 -0.3728 -0.5667]]
[-0.0466  0.4528 -0.4062]

So, suppose you have a trained ML model.
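Before worrying about deployment, the saved weights can be sanity-checked with a throwaway script. The following sketch (mine, not from the article's code download) re-implements the 4-5-3 forward pass in plain Python using the 43 rounded weights and biases listed above; because the weights are rounded to four decimals, the outputs match Figure 1 only approximately:

```python
import math

# Input-to-hidden weights (4 x 5), copied from the listing above.
ih_wts = [
    [ 0.2680, -0.3782, -0.3828,  0.1143,  0.1269],
    [ 0.3954, -0.4367, -0.4332,  0.3880,  0.3814],
    [-0.5503,  0.6453,  0.6394, -0.6454, -0.6300],
    [-0.3220,  0.4035,  0.4163, -0.3074, -0.3112]]
h_biases = [0.1164, -0.1567, -0.1604, 0.0810, 0.0822]

# Hidden-to-output weights (5 x 3) and output biases.
ho_wts = [
    [ 0.7552, -0.0001, -0.7706],
    [-0.7297, -0.2048,  0.9301],
    [-0.6733, -0.2512,  0.9167],
    [ 0.9367, -0.4276, -0.5134],
    [ 0.9381, -0.3728, -0.5667]]
o_biases = [-0.0466, 0.4528, -0.4062]

def predict(x):
    # Hidden layer: tanh of (weighted sum of inputs plus bias).
    h = [math.tanh(sum(x[i] * ih_wts[i][j] for i in range(4)) + h_biases[j])
         for j in range(5)]
    # Output layer: weighted sum plus bias, then softmax to probabilities.
    o = [sum(h[j] * ho_wts[j][k] for j in range(5)) + o_biases[k]
         for k in range(3)]
    exps = [math.exp(v) for v in o]
    return [e / sum(exps) for e in exps]

probs = predict([6.1, 3.1, 5.1, 1.1])
print([round(p, 4) for p in probs])  # approximately [0.0321, 0.6458, 0.3221]
```

Running this reproduces the hand calculations above: the second probability is largest, so the prediction is versicolor.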
You want to deploy the model to a small, weak IoT device. The simplest solution is to install onto the IoT device the same neural network library software you used to train the model. Then you can copy the saved trained model file to the IoT device and write code to load the model and make a prediction. Easy! Unfortunately, this approach will work only in relatively rare situations where your IoT device is quite powerful—perhaps along the lines of a desktop PC or laptop. Also, neural network libraries such as CNTK and Keras/TensorFlow were designed to train models quickly and efficiently, but in general they were not necessarily designed for optimal performance when performing input-output with a trained model. In short, the easy solution for deploying a trained ML model to an IoT device on the edge is rarely feasible. The Custom Code Solution Based on my experience and conversations with colleagues, the most common way to deploy a trained ML model to an IoT device on the edge is to write custom C/C++ code on the device. The idea is that C/C++ is almost universally available on IoT devices, and C/C++ is typically fast and compact. The demo program in Figure 3 illustrates the concept. Figure 3 Simulation of Custom C/C++ IO Code on an IoT Device The demo program starts by using the gcc C/C++ tool to compile file test.c into an executable on the target device. Here, the target device is just my desktop PC but there are C/C++ compilers for almost every kind of IoT/CPU device. When run, the demo program displays the values of the weights and biases of the iris flower example, then uses input values of (6.1, 3.1, 5.1, 1.1) and computes and displays the output values (0.0321, 0.6458, 0.3221). If you compare Figure 3 with Figures 1 and 2, you'll see the inputs, weights and biases, and outputs are the same (subject to rounding error). Demo program test.c implements only the neural network input-output process.
The program starts by setting up a struct data structure to hold the number of nodes in each layer, values for the hidden and output layer nodes, and values of the weights and biases:

#include <stdio.h>
#include <stdlib.h>
#include <math.h> // Has tanh()

typedef struct {
  int ni, nh, no;
  float *h_nodes, *o_nodes; // No i_nodes
  float **ih_wts, **ho_wts;
  float *h_biases, *o_biases;
} nn_t;

The program defines the following functions:

construct(): initialize the struct
free(): deallocate memory when done
set_weights(): assign values to weights and biases
softmax(): the softmax function
predict(): implements the NN IO mechanism
show_weights(): a display helper

The key lines of code in the demo program main function look like:

nn_t net; // Neural net struct
construct(&net, 4, 5, 3); // Instantiate the NN
float wts[43] = { // Specify the weights and biases
  0.2680, -0.3782, -0.3828, 0.1143, 0.1269,
  . . .
  -0.0466, 0.4528, -0.4062 };
set_weights(&net, wts); // Copy values into NN
float inpts[4] = { 6.1, 3.1, 5.1, 1.1 }; // Inputs
int shownodes = 0; // Don't show
float* probs = predict(net, inpts, shownodes);

The point is that if you know exactly how a simple neural network ML model works, the IO process isn't magic. You can implement basic IO quite easily. The main advantage of using a custom C/C++ IO function is conceptual simplicity. Also, because you're coding at a very low level (really just one level of abstraction above assembly language), the generated executable code will typically be very small and run very fast. Additionally, because you have full control over your IO code, you can use all kinds of tricks to speed up performance or reduce memory footprint. For example, program test.c uses type float but, depending on the problem scenario, you might be able to use a custom 16-bit fixed-point data type. The main disadvantage of using a custom C/C++ IO approach is that the technique becomes increasingly difficult as the complexity of the trained ML model increases.
For example, an IO function for a single hidden layer neural network with tanh and softmax activation is very easy to implement—taking only about one day to one week of development effort, depending on many factors, of course. A deep neural network with several hidden layers is somewhat easy to deal with—maybe a week or two of effort. But implementing the IO functionality of a convolutional neural network (CNN) or a long, short-term memory (LSTM) recurrent neural network is very difficult and would typically require much more than four weeks of development effort. I suspect that as the use of IoT devices increases, there will be efforts to create open source C/C++ libraries that implement the IO for ML models created by different neural network libraries such as CNTK and Keras/TensorFlow. Or, if there’s enough demand, the developers of neural network libraries might create C/C++ IO APIs for IoT devices themselves. If you had such a library, writing custom IO for an IoT device would be relatively simple. The Microsoft Embedded Learning Library The Microsoft Embedded Learning Library (ELL) is an ambitious open source project intended to ease the development effort required to deploy an ML model to an IoT device on the edge (microsoft.github.io/ELL). The basic idea of ELL is illustrated on the left side of Figure 4. Figure 4 The ELL Workflow Process, High-Level and Granular In words, the ELL system accepts an ML model created by a supported library, such as CNTK, or a supported model format, such as open neural network exchange (ONNX). The ELL system uses the input ML model and generates an intermediate model as an .ell file. Then the ELL system uses the intermediate .ell model file to generate executable code of some kind for a supported target device. Put another way, you can think of ELL as a sort of cross-compiler for ML models. A more granular explanation of how ELL works is shown on the right side of Figure 4, using the iris flower model example. 
The process starts with an ML developer writing a Python program named iris_nn.py to create and save a prediction model named iris_cntk.model, which is in a proprietary binary format. This process is shown in Figure 1. The ELL command-line tool cntk_import.py is then used to create an intermediate iris_cntk.ell file, which is stored in JSON format. Next, the ELL command-line tool wrap.py is used to generate a directory host\build of C/C++ source code files. Note that "host" means to take the settings from the current machine, so a more common scenario would be something like \pi3\build. Then the cmake.exe C/C++ compiler-build tool is used to generate a Python module of executable code, containing the logic of the original ML model, named iris_cntk. The target could be a C/C++ executable or a C# executable or whatever is best suited for the target IoT device.

The iris_cntk Python module can then be imported by a Python program (use_iris_ell_model.py) on the target device (my desktop PC), as shown in Figure 5. Notice that the input values (6.1, 3.1, 5.1, 1.1) and output values (0.0321, 0.6457, 0.3221) generated by the ELL system model are the same as the values generated during model development (Figure 1) and the values generated by the custom C/C++ IO function (Figure 3).

Figure 5 Simulation of Using an ELL Model on an IoT Device

The leading "(py36)" before the command prompts in Figure 5 indicates that I'm working in a special Python setting called a Conda environment, where I'm using Python version 3.6, which was required at the time I coded my ELL demo. The code for program use_iris_ell_model.py is shown in Figure 6. The point is that ELL has generated a Python module/package that can be used just like any other package/module.
Figure 6 Using an ELL Model in a Python Program

# use_iris_ell_model.py
# Python 3.6

import numpy as np
import tutorial_helpers  # used to find package
import iris_cntk as m    # the ELL module/package

print("\nBegin use ELL model demo \n")

unknown = np.array([[6.1, 3.1, 5.1, 1.1]],
  dtype=np.float32)
np.set_printoptions(precision=4, suppress=True)
print("Input to ELL model: ")
print(unknown)

predicted = m.predict(unknown)
print("\nPrediction probabilities: ")
print(predicted)

print("\nEnd ELL demo \n")

The ELL system is still in the very early stages of development, but based on my experience, the system is ready for you to experiment with and is stable enough for limited production development scenarios.

I expect your reaction to the diagram of the ELL process in Figure 4 and its explanation is something like, "Wow, that's a lot of steps!" At least, that was my reaction. Eventually, I expect the ELL system to mature to a point where you can generate a model for deployment to an IoT device along the lines of:

source_model = ".\\iris_cntk.model"
target_model = ".\\iris_cortex_m4.model"
ell_generate(source_model, target_model)

But for now, if you want to explore ELL you'll have to work through several steps. Luckily, the ELL tutorial from the ELL Web site, on which much of this article is based, is very good. I should point out that to get started with ELL you must install ELL on your desktop machine, and installation consists of building C/C++ source code; there's no .msi installer for ELL (yet).

A cool feature of ELL that isn't obvious is that it performs some very sophisticated optimization behind the scenes. For example, the ELL team has explored ways to compress large ML models, including sparsification and pruning techniques, and replacing floating point math with 1-bit math. The ELL team is also looking at algorithms that can be used in place of neural networks, including improved decision trees and k-DNF classifiers.
The tutorials on the ELL Web site are quite good, but because there are many steps involved, they are a bit long. Let me briefly sketch out the process so you can get a feel for what installing and using ELL is like. Note that my commands are not syntactically correct; they're highly simplified to keep the main ideas clear.

Installing the ELL system resembles:

> (install several tools such as cmake and BLAS)
> git clone
> cd ELL
> nuget.exe restore external/packages.config -PackagesDirectory external
> md build
> cd build
> cmake -G "Visual Studio 15 2017 Win64" ..
> cmake --build . --config Release
> cmake --build . --target _ELL_python --config Release

In words, you must have quite a few tools installed before starting, then you pull the ELL source code down from GitHub and then build the ELL executable tools and Python binding using cmake.

Creating an ELL model resembles:

> python cntk_import.py iris_cntk.model
> python wrap.py iris_nn_cntk.ell --language python --target host
> cd host
> md build
> cd build
> cmake -G "Visual Studio 15 2017 Win64" .. && cmake --build . --config release

That is, you use ELL tool cntk_import.py to create a .ell file from a CNTK model file. You use wrap.py to generate a lot of C/C++ specific to a particular target IoT device. And you use cmake to generate executables that encapsulate the original trained ML model's behavior.

Wrapping Up

To summarize, a machine learning model is all the information needed for a software system to accept input and generate a prediction. Because IoT devices on the edge often require very fast and reliable performance, it's sometimes necessary to compute ML predictions directly on a device. However, IoT devices are often small and weak, so you can't simply copy a model that was developed on a powerful desktop machine to the device. A standard approach is to write custom C/C++ code, but this approach doesn't scale to complex ML models.
An emerging approach is the use of ML cross-compilers, such as the Microsoft Embedded Learning Library. When fully mature and released, the ELL system will quite likely make developing complex ML models for IoT devices on the edge dramatically easier than it is today.

Thanks to the following technical experts for reviewing this article: Byron Changuion, Chuck Jacobs, Chris Lee and Ricky Loynd

Discuss this article in the MSDN Magazine forum
https://docs.microsoft.com/en-us/archive/msdn-magazine/2018/july/machine-learning-machine-learning-with-iot-devices-on-the-edge
One thing I've always wondered is why there's no obvious possibility of developing reusable web user control libraries within Visual Studio .NET. There is the “web control library” project template, which is supposed to contain web custom controls, but there is no straightforward way of adding web user controls to that library. Maybe a lot of folks like developing web custom control libraries, but for me the custom (or lately composite) controls don't quite cut it. I mean, let's face it, for a composite control (that is, a control whose sole purpose in life is to bind together other existing controls) would you rather code your control's layout or design it in markup? The answer's obvious (if you're not a masochist) and the balance clearly leans towards user controls, since they provide a clear “Page-like” separation between markup and code. But enough rambling, here's the deal: although, as I said, there is no straightforward way of reusing user controls across applications, it turns out there are at least three ways to do it (that are not so straightforward :)): 1. As explained by Scott Guthrie here and here, you could basically build a reusable user control library as a web site. More to the point, you create a web site (or web application) project, add your user controls to it and compile it (meaning that you get one or more assemblies with code and one or more .ascx files representing your controls' markup). Then, in the web site where you want to use the control library (that is, the website that will contain the user controls), you can just add a reference to the previously built assemblies, copy the .ascx files into your website and start dropping them on the pages of your choice. The solution is pretty OK as far as I'm concerned; the only thing about it that I don't like is that it doesn't feel “natural”. I mean, after all, you do use a web site (or application) as a control library.
While there might not be any implications to this (although there are; one that comes to mind is the inability to have any helper code in the user control website/library in ASP.NET 2.0, because it would have to reside in the App_Code folder and then, since there are no namespaces in ASP.NET 2.0 web sites, it might collide with the App_Code of the web site it's being used in), I'm still not quite at peace with this kind of deployment. 2. As explained in this excellent article on CodeProject, you could basically build a custom control library, add a custom control to it and, for that control, also add an .ascx file by hand and set its build action to embedded resource. Then, the custom control would load the markup from the resources at runtime, do a ParseControl on it to create a control and then add the newly created control to its (the custom control's) control collection. This solution is also pretty good; the only nasty thing is that, since the custom control is nothing more than a placeholder for the user control that gets created from the markup at runtime, the custom control must assign the correct references (from the control newly parsed from markup) to any child controls declared in it that it wants to have. For complex controls this can be a drag and, although it could be automated, it would be time consuming. On the bright side, from the client's point of view, your control is very easy to use. No need to copy .ascx files anywhere, just reference the library and you're good to go. 3. Basically a mix of the above, it goes like this: create a class library project. Add both a code file and an .ascx file (for the markup; you will have to add the .ascx file by hand, maybe by generating it/copying it from outside Visual Studio into the project's folder and then including it in the project). Because I have a short memory I usually start by copying the contents of a clean user control that VS.NET generates when you add one to a regular web site.
Let your control derive from UserControl (not, as in 2, from Control or CompositeControl). Build your control to your liking; you even have designer support (in VS.NET 2005 at least, haven't tested in 2003). Now build the class library; you should get an assembly plus a number of .ascx files containing the markup. Reuse them in the same fashion described at 1. You have to be careful to delete the CodeFile attribute of the @Control directive or you will get a "file not found" error when building the web site using the control (since the .cs code file is already built into the reusable assembly). Also, when loading the control dynamically, through Page.LoadControl (note that you can also drag-and-drop the .ascx file onto the page in the designer and there's nothing else you must do), be careful to add it to a runat="server" control, such as a PlaceHolder, or you will get a runtime error. To wrap up, what I like about this solution is that it kinda gives me the best of both worlds (1 & 2), without incurring the "overhead" and problems of reusing an entire web site/application (as 1) and without the need to resort to mapping controls and embedding resources (as 2), while still being able to contain other code that it might need (unlike 1). So there you have it, three ways I know of to reuse web user controls. I'd be happy to learn more if you have any ideas and, of course, any feedback on the above is greatly appreciated. Happy reusing ;)
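P.S. To make the CodeFile fix from way 3 concrete (the control and namespace names below are made up, just to show the shape of the directive): the .ascx you add to the class library starts out with a directive like the one VS.NET generates for a regular web site,

```
<%@ Control Language="C#" AutoEventWireup="true"
    CodeFile="MyControl.ascx.cs" Inherits="MyLib.MyControl" %>
```

and before you copy the .ascx into the consuming web site, you drop the CodeFile attribute so the markup binds to the type already compiled into the referenced assembly:

```
<%@ Control Language="C#" AutoEventWireup="true" Inherits="MyLib.MyControl" %>
```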
http://geekswithblogs.net/dotnetrodent/archive/2006/06/16/82136.aspx
Alteration to a canister filter for increased suction/Water Flow 8/8/11 Hi Crew.. <Hello Srinivas> Am back again.. <That phrase is reserved for Arnold. :-)> Have a small query and sure to sound stupid too.. anyway feel better to take your advice rather than doing anything stupid. I have a 220 Gallon FOWL <FOWLR> with a 35-40 Gallon sump. The sump is a 3 chamber set up with one having a DSB , one chamber hosing <housing> a Protein skimmer nad <and> the central chamber with the return pump. There are several pumps placed on the side walls of the 6' ft display tank nad <and> cause much movement of the water through the tank. While this takes care of the basic filtration, I still find fine particles of algae/debris/etc floating in the display tank. The movements in the tank keep it afloat and many such particles are nit <not> rushed through the overflow hole that flows to the sump. I have a spare Eheim Canister filter (though much lesser in capacity for this tank) which I thought for using to suck in these floating matter. The problem is that the suction is too low and particles just float besides ( if they do not happen to touch the intake pipe end) pipes and remain visible in the tank. They are more irritating on days when I clean the tank (weekly once). Can I attach a powerhead ( more powerful than the canister powerhead) to the suction tube to increase the suction rate? My purpose is just to suck the floating matter and not for water circulation? Kindly advice <You just need to fine tune/experiment with your powerheads by pointing the output in different directions until you get your desired results.
May even want to try placing one or two of the pumps near the bottom of the tank. James (Salty Dog)> Best Filter Canister, sel. 1/21/11 Hi there how are you today? <Fine, thank you> I have a 95 gallon FOWLR aquarium of saltwater with a 404 Fluval canister with 4 filter media basket (carbon, foam, BioMax and Clearmax), but I want to upgrade to a better canister, when I say better I mean 2010-2011 model. <Mmmm> I saw the JBJ Reaction 4-Stage Canister Filter + UV and it caught my eye, its not that expensive and has also 4 media baskets, look really neat. Would you recommend me this filter?, if not what other filters are there that are good comparing my actual Filter canister and affordable price. Thank You, have a nice day ! Ingrid Leija <Though these lines of canister filters have GREATLY improved over the years, my favorite brand/manufacturer is still Eheim: Quiet, dependable, energy-conserving... VERY long lasting. Bob Fenner> in a SW setup? :Eheim ECCO 2236 6/1/2009 Hello All, <Hi> I hope you are enjoying the warming weather. <Well, stuck in Montreal Canada at the moment, so it is a bit cooler than what I am used to in Florida.> I have a questions regarding using (or not) the Eheim EECO 2236. I received one as a gift and I am not sure how I can use it in my current setup: <Some of us are for them, other are against, I have two big Eheims on my tank.> 2 juvenile maroon clowns. 100lbs of Live rock (no corals, yet) <OK> 62.5 gallon with a 20 gallon sump with 15 setup as a refugium (4 inch sand bed with small live rock fragments, also Chaeto with lights running against the tank cycle (12 on / 12 off) <Sounds good.> All the other mechanics, Aqua C EV120 protein skimmer (Your suggestion IT ROCKS!!), heater/chiller 3Xpower heads. My levels are all acceptable and I have no issues currently with nitrates, ammonia, PH, etc.. (except for evaporation, living here in LAS VEGAS!!!) Is there any use for this freebee? <Certainly. 
Canister filters work differently in a marine setup. I would not use it for biological filtration - let the live rock take care of that. They are good for mechanical and chemical filtration, as well as additional water movement. You do have to be stricter on the maintenance; I clean mine every two weeks.> I should say that I am setting up a 50g fish only system (any suggestions about a puffer and tank mates) that has a Fluval 304 and a Coralife Super Skimmer, maybe the Eheim can work hand in hand with the Fluval? <Not much else with a puffer in a 50 gallon.> To Eheim or not to Eheim.. That is the question. <Eheim.> Thanks again for your help. <My pleasure, MikeV> Re: Canister filter in a SW setup? :Eheim ECCO 2236 6/1/2009 Thanks, What media would you use for mechanical and chemical filtration? <For mechanical filtration, I just use the bulk filter pad available at any pet store and cut it to fit. For chemical filtration, I use plain activated carbon, plus a Poly-Filter as needed.> I am new to the world of canister filtration. I assume I need to remove the "Substrat"? <Yes, the Substrat is for biological filtration, and should be left out of a canister filter if you have live rock.> Thanks <MikeV> I need help - PLEASE! SW canister filter media 12/21/08 Hello, <Hello Roxie> I have a huge dilemma and need help with this. I recently purchased a 75 gallon bow front aquarium. I have set it up as a salt water tank. When I got the fish tank from my brother, everything was included in it. Unfortunately, I have misplaced a vital component. I am not sure what exactly it does, but my brother says it is very important. I think it is a part of the protein skimmer or the filtration system. It is a back sponge-like material shaped in a cylinder. It was approximately 2-5 inches tall and about 2-3 inches wide. It had a "ball" inside of it. <Mmm... interesting... 
Would you please send along some photos of this outfit, particularly the area where this filter element goes?> I looked on the internet for over an hour tonight and I still don't know what this part is. <Mmm, I can't tell from the description either> There aren't many pictures to look at to compare, that is why I am seeking your expertise. The product name on the protein skimmer is JEBO and the other is a Cascade Canister Filter. <Oh... likely some part of the filter media here... an insert-able pad that goes near the top and or bottom... Mmm, maybe: 272293187QQcmdZViewItemQQptZLH_DefaultDomain_0?hash=item330272293187#ebayphotohosting no worries re substituting other material instead. Please read here: and the linked files where you lead yourself> Please help me. I don't want my fish to die. Thank you for your time, Roxie <Mmm, I would do a bit of reading re marine filtration period... There may well be other options, and definitely more that you'll want to know... to assure your success as an aquarist. Cheers, Bob Fenner> Re: Filter Question 4/23/08 Hi James <Hello Liyu> Thanks for the quick reply. <Welcome>>> Nitrate and Canister filters, Nitrate Sources 10/2/07 Hi, <Hello> So I've recently decided to add some corals to my 3 month old 65 gal with a 6" DSB, (green star polyp, a brown mushroom and a waving hand). My set up is a marineland C-360 canister filter filled with carbon, phos-guard, Purigen and filter sponge, I took the bio-balls out because they were adding to nitrates. <The nitrates are still being created in the same amount, now just hopefully in an area more conducive to nitrate reduction such as LR or your DSB.> I also have a 15 gal refugium with Caulerpa lit 24/7, which is growing nicely, an Aqua C Remora, a mat of Chaeto growing the main tank to supply food for a Mandarin, <You are aware that Mandarins do not eat algae?> and a submersible UV sterilizer. 
My nitrates have been constantly at around 10-15ppm for a few weeks now, after dropping initially following the introduction of the refugium. Everything is fine otherwise, my ammonia, phosphate, nitrite are 0, pH-8.2, calcium 380-400, alkalinity 10dKH, and my corals seem to be doing great, all opening and waving nicely in the water. So my question is this: I was considering getting rid of the canister filter, because I read that they can become nitrate factories, and figuring that this may be where the nitrates are coming from, was wondering if my tank would benefit from switching the canister with a Tidepool II from Marineland. <Will be in the same boat, both just trap detritus instead of removing it from the system like skimmers and water changes do.> I do regular feeding of three flakes of ocean nutrition (about 2" in diameter), and every other day replacing this with some frozen foods soaked in garlic. <I would switch to pellets from flakes, they hold their nutrients better, especially water soluble vitamins.> My tanks inhabitants consist of 4 perculas, 1 coral beauty, 2 Chromis, 1 blue tang and the mandarin goby. <Think you are going to have trouble with this mix.> I do not think I am overfeeding them, because they are constantly hungry, and seem to be pretty healthy with this feeding schedule. I do a 10gal water change bi-weekly. I was also wondering about De-Nitrate from SeaChem, which I am currently using in my refugium. Does it really do as it says, because it looks just like small porous rocks, which would most likely house aerobic bacteria along with the anaerobic, and create nitrates as well as remove them. So I guess the point of this rambling question is should I remove the canister and De-Nitrate? Or do you think something else may be contributing to my nitrate levels? Thanks, Dan <I would not use the De-Nitrate, only a bandage over the problem, not the cause. At your current nitrate levels I think if you do weekly water changes that would be enough. 
On a side note please take a second look at your stocking, I think you are headed for trouble here.> . Re: Canister filter 3/18/06 Thank you James for responding so quick. I understand what you are saying, but lets just say that I would clean the filter every week. I only have three fish so far and there not very big. Do you think it would be a lot better of a filter system then? Plus I didn't mention that it came from a FW tank. The bio wheels are still wet with RO water. Would it hurt my salt fish if I would just throw the system on and run with it. Or do I need to do something different? Thanks for your help! <Cody, please reply with original query. I answer many queries weekly and I cannot remember what was said in your original query. I need to familiarize myself with such to answer. James (Salty Dog)> Cody> HOT Magnum ... air entrainment- 2/28/2006 First I have to say that I love your site and refer to it all the time (and usually find the info I am looking for). <Okay> That being said, I could not find any reference to the HOT Magnum releasing bursts of microbubbles. <Yikes... trouble> I spent ½ hr on the phone with Marineland and they didn't seem to be able to help me. <Unusual> Their suggestion was to 'burp' the canister. <Mmm, would help if this situation was temporary... that is, only bubbly for a short while (a day let's say at the most), but then all aire removed that was entrapped by opening...> To leave it running and lift it horizontal while making sure that the intake tube stays in the water. This is supposed to get the air out. <Will... if the amount of air is finite...> But it didn't seem to work. I still have bursts of microbubbles every 20 seconds or so. When I first set it up about 2 weeks ago, I don't recall seeing any bubbles. I have since pulled it apart for cleaning (lots of algae at the moment) and since then I get these bursts of bubbles. Any ideas? Thanks, Donna <Yes... there is an area where air is "getting in"... 
and this is dangerous... for a couple of reasons... Emphysematosis ("gas-bubble disease"... you can read re on WWM), and the possibility of leaking... Somewhere about the tubing, connections, the actual closure of the Magnum, there is a way for air to get in (and possibly water to get out...). A few ways can be employed to detect and fix this leak... First, hand-tightening all by twisting the tubing, hard fittings together may "do it"... Turning the unit off, and swiping a bit of toilet paper along the junctures (the intake lines outside the tank, the top/seal of the filter, and the discharge lines) may reveal a bit of water seeping out where the air is getting in... This/these should help... but do write back if you are still unable to find/fix the leak. Oh, and you did lube the o-ring? Bob Fenner> Equipment/Magnum Hang On - 03/05/06 Crew, <Donna> I wrote to you about a week ago about a problem I was having with my HOT Magnum filter and constant bursts of microbubbles. The advice I had gotten from Bob, I think, was to make sure I had lubed the o ring. I used up the whole little tube that they sent and it did seem to help for a while. Then it started up all over again. My tank glistened with bubbles all over everything. I finally solved it. I brought the Magnum back to the store and got a Rena Filstar XP1 (tank is only 30 gal). Best investment I ever made!!! My tank is cleaner and clearer then it ever was. I would have not given that filter a second thought because I had never heard of it except that I had read in a few places on this board about what a wonderful filter it is. Thank you for having such a great board with so much information. I finally have a tank that I can't take my eyes off of. <Thank you for the info Donna. I too have a Magnum HOT I no longer use for the same reason as yours. Marineland did mail me new gaskets, "O" rings etc, but didn't help any. Did mention that to them but have heard nothing from them. We are both stuck with them I guess. 
Too bad because they are a nice filter, easy to clean, etc, but tank isn't much to look at with microbubbles every five seconds.> Thanks again, <You're welcome. James (Salty Dog)> Donna> Filter media No more bio balls ! Fluval 404 added 1/20/06 Hello all ! <Hello Nemo> I recently dumped all my bio-balls knowing that it's a source for nitrate. It's been 3 days and I am constantly checking water chemistry, so far so good... and I also start using PolyFilter pad. <Good product> Nitrate has gone down now between 10-20 as I previously (before the bio-balls removal) have 20-40, I hope it continue to drop to 0, I'm doing 15 gal water change every 5 days using RO water. I have a 72 gal reef bow front ( been running for 9 mo.s now ),150 lb LR & 2 1/2" LS, I would say I started this hobby wrong , but thank God for finding your website and now I'm learning as I go along (wasted a LOTTTT! of money on wrong and useless equipment), anyway, back to the subject... <Yes, and keep on reading.> I also started dripping Kalkwasser every night using Kent's Aquadoser. One thing I've noticed and this is the 2nd day now, that my skimmer is not skimming at all, I have an Aqua C Urchin Pro which is producing a lot of dark waste,<Now or was?> not until I dump all my bio-balls, use PolyFilter pads and totally clean my sump. How often do you have to clean your sump? <As often as necessary. Leaving detritus/waste in the sump just adds to the nitrate level. If you have a place to put a filter pad in the sump, do this and clean/discard weekly.>I have a lot of detritus accumulating underneath it and it really bothers me, this is the 2nd time I cleaned this sump in my 9mos of running this system, Am I creating a problem here? I still don't know why my skimmer is not producing anything at all, I've been adjusting the collection cup back and forth and still nothing, what have I done wrong?! 
<The Polyfilter acts much like a skimmer removing organic waste, nitrates, phosphates, etc and I believe this is why your skimmer slowed down.> Also another subject/question, I recently bought a Fluval 404 for I don't really have a spot for placing any maintenance media anywhere in my small sump. My main reason for purchasing this is for me to maximize the use of carbons, Phosphate absorbers or maybe nitrite sponge etc... Now to my questions, Is this a good addition or just a waste of money? and another piece of equipment to clean ?<Its a worthwhile addition, as you say a place to put media especially if you do not have trays in the sump for such.> I can still return it to my LFS as it is still in the box. If this is a great addition, pls let me know on how to maximize its use, like where to place the carbons, phosphate absorber, sponges... is there a specific order in placing them? How about PolyFilter pads? <I'd forget the others for now, just use the Polyfilter.> is there anywhere in this unit where I can squeeze them in, how about replacing the filter pad on the side of this unit with the PolyFilter instead??? or cut them into small pieces and make them fit to the compartment??? <You need to put a filter pad in before the Polyfilter or you will shorten the life of the Poly. Yes, you can cut the Polyfilter into squares, I do the same now.> I really want to do this the right way before installing it, and also I don't like anything to "clash" as far as mixing this medias in just one container, it kinda reminds me of putting Gasoline, Alcohol, Kerosene in a cars tank and start the engine. Pls help and advise. Thank you in advance. Nemo1 P.S. Is there anything that I have to remove or add to this Fluval 404? also where should I place the intake and return ? should I just follow the manufacturers instruction book? <Yes> I don't know, It seems like I got so dependent on you guys that I don't trust the manufacturer that built this unit. 
<Don't believe I'd buy a component from a manufacturer I didn't trust. Do believe the manual until you have reason not to.> Have a great day...and may the force be to all of us !!! <Good luck. James (Salty Dog)> Canister filters 11/24/2005 Hi guys, was wondering if I should remove my Eheim canister filter at this point. I have a 75 gallon fish only live rock system with a TF 1000 skimmer hob refugium , and was going to remove my canister filter to make my tank more natural, my only concern is that I do not pre filter my tap water but the water seems to be fine, what do you think? <Canister filters are a good means of removing suspended detritus etc. and a good place to put filtering media such as Chemi-Pure and/or others. Filter pads will need to be cleaned/replaced weekly to help keep nitrate levels down. If you have a heavy fish load, I would recommend the use of your Eheim. I too do not filter my tap water as nitrates/phosphates are undetectable. I do aerate the tap water 24 hours to expel any Co2 before mixing the salt in. James (Salty Dog)> thanks Ron Fluval 403 Replacement Parts - 11/27/2005 You have a tremendous website. <Thank you. Tremendous work has gone into it.>? <I would contact the manufacturer for these; if they can't get them to you, they'll likely be able to tell you who can in your area. Try this site: .> Thank you. -Donald <Wishing you well, -Sabrina> Oops! Fluval 403 & Great website! De John - 11/28/05? Thank you. Donald <Hi Donald. This is a common problem that can be mitigated by keeping the o-rings lubricated, whether in use or not. I use silicone grease. For replacements, I would recommend calling the manufacturer - web searches came up dry for me also. Good luck, John> <<Try changing your search terms, I found this in about five seconds using "Fluval 403", then on the very first website I hit, I searched on "gasket", got this:. 
I'd give this place a call, they carry the O-rings for most other Fluvals, they might be able to get a hold of this one (should be able to). Marina>>> Canister Cleaning - 11/15/2005 Josh, <Tony> You say to rinse weekly ChemiPure, sponges and BioMax, how should I do this because I was told never to rinse or replace ChemiPure and sponges at the same time because of the beneficial bacteria? <BioMax monthly. The other two will be fine to rinse weekly. They are only for chemical/mechanical filtration.> Thanks, Tony <Welcome. - Josh> Chemipure Rinse - 11/15/2005 Josh, <Tony> You say to clean my ChemiPure weekly, do I have to rinse in tank water or does it matter? <Just use fresh. Doesn't matter.> Thanks, Tony <Welcome. - Josh> How do I set up a Jebo Odyssea CFS4 - no instructions were included 11/10/05 I recently bought a Jebo Odyssea CFS4 canister filter on EBay. When it arrived, there was no instruction booklet. There was a brief description of how to use it on the outside side panel of the box, which referred to the instruction manual. I found no instruction manual of any kind in the box. There is no contact information (address, website, or phone) on the box. I'd love to contact them as I'd like to find out how to set up, run, and maintain my canister filter. Or, alternately, is there a good source I could turn to for canister filter installation basics? Many thanks, Laura Schofield <Here's Jebo's site:. I'd contact them re... if this is taking too long, you can read through WWM's archives on canister filters. Bob Fenner> Re: How do I set up a Jebo Odyssea CFS4? 11/12/2005 Thanks Bob - you guys have compiled a goldmine of information. Unfortunately there is so much information. I've been in this hobby for almost 2 years now, and the more I learn, the more I realize that I have to learn!!! <Heeee! 
Lots of good help> I've been looking through your WWM archives on canister filters in general (I don't see much mention of Jebo Odyssea - mainly Fluval or Eheim) and will try your suggestions on priming my filter. <Real good... should work... they're a newer co./brand... but about the same principles> In addition to writing to you, I also wrote to the eBay store I bought this from. They replied today that as this is a direct import from China, the instructions are in Chinese. They referred me to the brief instructions on the side of the box (which I've already followed - aside from pre-filling the filter with water; would you recommend this as well?). I plan on doing a water change tonight - this might be a good time to start my canister filter. <Yes> I'll tell you a bit about my system. It's a 40 gallon FOWLR, although my live rock is only a 5 lb piece, and it's been decimated by my fish - maybe there's some life left inside though. My fish mostly started off as brackish, and now I have them in full salt water and they seem to be doing well - very nice color and no problems after about a year of full salt water conditions. I have: 3 mono sebae; 1 green scat; 1 yellow-tail blue damsel; 1 yellow tang; 2 peppermint shrimp. <<Neat mix! MH>> My substrate is 40 lbs of CaribSea aragonite, which I seeded with 5 lbs of live aragonite from my LFS. For filtration, I run a Skilter 250 (yes, I know it's a dinosaur - but it's all I could afford at the time). I currently have to change the filter pads about once every 3 weeks, so I hope the canister filter will help with this. <Should> Also, I do weekly 25% water changes and have a dozen mangroves in the tank for additional denitrification. <Neat!> <<Have you seen the lagoon/brackish display of the Long Beach Aquarium of the Pacific? Rather similar, really interesting display. MH>> I keep the SG at around 1.022 - and will now start adding calcium, iodine, strontium & molybdenum for the peppermint shrimp (recently moved from my reef nanocube to the big tank due to the disappearance of an ocellaris clown; I now think the ocellaris died of causes other than being hunted by anything, and my peppermint shrimp ate him very quickly - no trace of him whatsoever, gone in less than a day!). <Happens... the shrimp didn't kill the clown though...> Anyway, I put the peppermints in my big tank as a knee-jerk reaction and don't fancy trying to catch them, so here they'll stay. After I get the canister filter going, I want to add a UV sterilizer that I bought last spring. It needs to go after a canister filter though, as solid particles in the water would apparently break it. I know my current setup leaves a lot to be desired, but it is a work in progress. I did weekly water tests for several months, but am now a bit lazy on this and only test about monthly. The parameters are always excellent though, with ammonia, nitrites and nitrates always at zero. Do you have any criticisms/suggestions for me? Your advice is greatly appreciated. Many thanks. <Mmm, no advice... but to keep on keeping on! Bob Fenner>

Fluval 404 - 101 11/7/05
Hi, <Howdy> I have a Fluval 404 that I got with an entire tank setup and an orphaned turtle that a couple left behind when they skipped town. <Scumbies, or at least an element of scumbaggish behavior> I can get the water flowing only when I use the self-primer; the pump doesn't seem to be working. I cleaned everything using the manual for instruction. <Not an uncommon problem...> Does the impeller spin 360 degrees? <Yep> It seems to be built to turn back and forth versus spinning like a propeller. <...> Do I have to purchase a new unit? <Not likely... just have to "clear" the air in the present one> Will the turtle (Pig Nose turtle - water species) be OK with no filter for a while? <Not a good idea... stinky...> Please help!
Mark <Please read here re getting the air out of the Fluval: , and the linked files above. There are a few techniques for "starting" (i.e. filling) these canister filters with water... and yours may run better being lowered... below the tank... at least to exclude air initially. Once you do it, you won't forget how. Cheers, Bob Fenner>

Re: Fluval 404 11/9/05
Bob, <Mark> I finally got it going by filling the input hose with water; this enabled the water to "rush" into the canister. <Ah, good!> Thanks for responding to my questions... and the turtle is really cool! <Great. Life to you. Bob Fenner>

Filter question 10/6/05
Hello, <Hello Anne.> Great site... I love reading it and getting educated! <As you well know> My question is: I'm running a Fluval 304 in my 55 gallon tank. In the tank I have 50 lbs of rock, <live rock?> about a 2 to 3 inch sand bed, a protein skimmer, and 2 powerheads. Do you think I really need to have the Fluval? One less piece of plastic in the tank would definitely be a plus. Right now my parameters are good: ammonia 0, nitrites 0, nitrates 0, pH 8.2-8.4, salinity 1.026, alkalinity 8 dKH and calcium 520 (is this a little high?) <I like to see dKH readings of 8-12 and calcium levels of 375-425. Canister filters are nice for removing detritus etc. from the tank. To avoid nitrate problems they need to be cleaned weekly, but they are not absolutely necessary. During weekly water changes, siphoning the sand removes much detritus. I do this weekly and it is hard to believe how brown the water is after only one week. You need to use a gravel-cleaner type siphon to keep from sucking the sand/gravel out. An easy one to maintain is the Marineland Magnum hang-on-the-back power filter... no hoses or anything to bother with. James (Salty Dog)> Thank you, Anne Canfield

Resun CY-280U Canister Filter 8/13/05
Hi <Hello> Just found your site & thought I would ask a stupid question. <... not stupid... but you mis-spelled the name of the manufacturer... it's Resun...
as in here: > We have recently bought the above canister filter but are having trouble with water leaking all over the place when we try to disconnect it to clean. Do you know this model at all, & are we doing something fundamentally wrong when we disconnect it? <Mmm, don't know the specific line, though I saw it at the InterZoo trade show in '04...> We turn off the power and turn the valve on the tube to off. Then, this is where I think we go wrong: we unscrew the tube & attachments from the lid, <Don't need to do this... just take the top off... with the lines in place> and this is when there is water all over the place. Should we just unclip the top of the canister instead? <Ahh, yes... though, due to poor design, there is still going to be some water (between the valves and the top) that overflows onto... a towel or bucket that you might place under the unit. Cheers, Bob Fenner> Regards, Kerrie

Fluval 404 bubbles 7/31/05
I recently purchased a NEW Fluval 404 on 6/27/05. I fought with it for 1 month, adding new hoses, O-rings, etc., even reshaping the hoses, and getting down to shaking the canister on a regular basis to release air. My problem is that every 30 minutes to 1 hour my Fluval releases a large amount of micro bubbles into the tank. I have tried everything that HAGEN has asked me to do. I am running the Fluval only - no airstones or anything else in the tank. It is a new salt water setup (2 months old), 40 gallons with live sand and rock. I returned the Fluval and the store replaced it with a new one. It has been running for 2 days, and I still get a rush of micro bubbles. Question: is there a flaw in this design, or am I getting a bad batch? <Likely there is a simple, poor/incomplete connection twixt a hard plumbing fitting and the flexible hose on the intake side...> Do you know of any other filter similar to the Fluval 404 that I can use that will not create this problem? <... Please read here: > I really do not want to use the Fluval anymore if the bubbles are always going to be there. PLEASE ADVISE. Pam <... as you will find, there are techniques for determining the source of entrained air... I would turn off, drain the unit, disassemble the intake side line... re-fit with a bit of silicone grease, making sure the flexible tubing is pushed on entirely over the hard plastic fittings. Bob Fenner>

Rena filter improvements 7/24/05
I just wanted to write to say thank you for suggesting I get rid of my Penguin 170 and upgrade to a larger filter. I have a 37 gallon salt water system with 2 clowns, 1 six-line wrasse, 1 skunk cleaner shrimp, 3 snails, and 3 crabs. I decided to upgrade to the Rena XP3. I really liked how easy the instructions were to follow. I was able to get it set up in less than an hour. I do, however, have three minor things to gripe about. First, I wish the holes in the spray bar were drilled to a larger diameter. <You can enlarge them... best with a Dremel tool> When I went to test the system, the spray bar made a constant whining noise. I went ahead and drilled the holes a little wider and it fixed the problem. Second, I wish the tubes that came with the filter were more flexible or a little bit wider. I had the hardest time putting them on the pipe assemblies, even after warming them up in hot water. The last thing was the impeller. After turning on the system, the impeller made some rattling noise. <Likely air caught about it... best to turn off, wait a few seconds, turn back on... to clear> I've read some reviews from people saying these filters are noisy. I now know why they write this: the impeller was not fully seated. I do hope the upgrade I made will help a little with my algae problem. The only thing I have left to do with my tank is smash my SeaClone 100 with a sledgehammer. <Heeee!> Thank you for having such a great site and for writing a great reference book. Chris Swanson <Thank you for writing...
and do consider penning a brief note to Rena, their distributor, re your concerns... it does help.>

Live Rock in Canister
Hello... I Googled my question and I only found one instance where you talked about it; that was a couple of days ago, and I forget why I thought this, but I didn't think it applied to me. Anyway, my canister filter (Magnum 350) has been building up nitrates, so I turned it off, and this weekend I am going to take out the filtration and possibly just use it as another powerhead and occasionally pop in some carbon or diatomaceous earth. <Any filter will cause nitrate buildup if the pad/cartridge isn't cleaned or replaced on a weekly basis.> Would putting live rock in there be a good idea? <I'd add more to the tank itself rather than the filter.> Just a few small pieces? And also, have you ever seen those mini skimmers that attach to your canister intake? I was wondering if that was a good idea too... appreciate any answers. <The nice thing about skimmers is that they collect from the surface, where most of the nasties are found. Yes, good idea. James (Salty Dog)> Thanks... Adam
import os
from lightning import Lightning
from numpy import random, asarray, linspace, corrcoef
from colorsys import hsv_to_rgb
from sklearn import datasets
import networkx as nx

lgn = Lightning(ipython=True, host='')

Connected to server at

A random graph where every node has the same degree, with default styling. This and other graph plots take matrices (e.g. adjacency matrices) as inputs. We use the networkx library to generate the graph, then get its adjacency matrix. Double-click on a point to see its neighbors (all should have 3). Drag points to try to move the graph; it should remain tight because of the degree structure.

n = 100
G = nx.random_regular_graph(3, n)
mat = nx.adjacency_matrix(G).todense()
lgn.force(mat)

The previous network was binary (all links either 0 or 1). Here the links are weighted, which is reflected in the line widths.

G = nx.random_regular_graph(3, 100)
mat = asarray(nx.adjacency_matrix(G).todense()) * (((random.rand(100, 100)) * 5) ** 2)
lgn.force(mat)

The lobster graph, a backbone with some leaves, is colored here by node ordering. We also set all nodes with degree less than 2 to gray.

G = nx.random_lobster(60, 0.5, 0.0, seed=44)
mat = nx.adjacency_matrix(G).todense()
n = len(G.nodes())
c = [list(asarray(hsv_to_rgb(float(x) / n, 0.8, 1.0)) * 255) if y > 1 else [150, 150, 150] for (x, y) in G.degree_iter()]
lgn.force(mat, color=c)

Here we color points (and also change their size) to indicate their degree. Click to confirm that bigger points have more connections.

G = nx.random_geometric_graph(50, 0.1)
mat = nx.adjacency_matrix(G).todense()
g = G.degree().values()
lgn.force(mat, group=g, size=(asarray(g) + 1.5) * 3)

Graphs can be a useful way to look at data that doesn't necessarily come from a graph. Here we create a graph from a thresholded correlation matrix on data drawn from a set of clusters. The cluster identities are clear in the resulting graph.

d, g = datasets.make_blobs(n_features=5, n_samples=50, centers=5, cluster_std=2.0, random_state=100)
c = corrcoef(d)
c[c < 0.9] = 0
lgn.force(c, group=g)
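Note that the cells above use the networkx 1.x API (`degree_iter()`, `degree().values()`), which was removed in networkx 2.x. As a minimal standalone sketch, assuming a modern networkx (2.x or later) and using `to_numpy_array` instead of `adjacency_matrix` (to avoid the scipy dependency), the same graph-to-matrix construction can be done without a Lightning server:

```python
# Sketch of the graph-matrix construction used above, assuming networkx >= 2.x.
# The dense adjacency matrix is exactly the kind of input lgn.force() consumes.
import networkx as nx
import numpy as np

n = 100
G = nx.random_regular_graph(3, n)        # every node has degree 3
mat = nx.to_numpy_array(G)               # dense adjacency matrix (no scipy needed)
degrees = [d for _, d in G.degree()]     # nx 2.x replacement for degree_iter()
sizes = (np.asarray(degrees) + 1.5) * 3  # same per-node size rule as the geometric example
print(mat.shape, set(degrees))
```

In newer networkx, `G.degree()` returns a view of `(node, degree)` pairs, so the list comprehension above stands in for both `degree_iter()` and `degree().values()` from the original cells.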
A Comparative Analysis of ULIP of Bajaj Allianz Life Insurance Co. Ltd with Mutual Fund

MINI PROJECT REPORT

Submitted by RAJEEV JOSEPH (Reg. No: 08BA020), 1st Year MBA, Karunya University, under the guidance of Ms. P.M. Anushia, Lecturer, Karunya School of Management, Karunya University, Coimbatore - 641114. 2008-2010.

DECLARATION

I, Rajeev Joseph, do hereby declare that this project work entitled "A Comparative Analysis of ULIP of Bajaj Allianz Life Insurance Co. Ltd with Mutual Fund" is an outcome of my study and is submitted in partial fulfillment of the requirement for the award of the degree of Master of Business Administration, Karunya University. I also declare that this report has not been submitted by me fully or partially for the award of any degree, diploma, title, recognition or any other fellowship of any other university before.

Place: Changanacherry
Date: 21-06-2009
RAJEEV JOSEPH

ACKNOWLEDGEMENT

Initially, let me thank the almighty God for guiding me all through the project work. I express my deep and sincere gratitude to Ms. P.M. Anushia, faculty guide, for providing the necessary assistance for the project. I sincerely acknowledge my gratitude to Mr. Justin Paul, Branch Manager of Bajaj Allianz Life Insurance Company Ltd, Changanacherry branch, and Mr. Biju Sebastian, Sales Manager, for giving me an opportunity to do this project. I also owe my sincere thanks to all the staff in Bajaj Allianz Life Insurance Company Ltd, Changanacherry branch, and the faculties of the Department of Business Administration, Karunya University, for their valuable guidance and suggestions in the preparation of this report and in completing the same successfully.

CONTENTS

Chapter 1: Executive Summary; Introduction; Objectives; Limitations
Chapter 2: Indian Insurance Industry; Industry Profile
Chapter 3: Unit Linked Insurance Policy (ULIP); Mutual Fund
Chapter 4: Data Interpretation and Analysis
Chapter 5: Findings and Suggestions; Conclusion and Recommendations; Bibliography; Annexure

EXECUTIVE SUMMARY

"A Comparative Analysis of ULIP plans of Bajaj Allianz Life Insurance with Mutual Funds in Changanacherry Branch" is an analysis done by Rajeev Joseph, student (MBA) of Karunya University, Coimbatore. Life insurance is a booming sector in today's economy. The total investment scenario is changing: in the past, people were not interested in investment because there were no good options available. Now there are many options available for investment, like life insurance, mutual funds, the equity market, real estate, etc. Today people want more services and more return on their investment, so most insurance companies are providing more value-added services along with the basic insurance operation.

While investing in any investment option, the investor checks whether his money is safe or not. Customers always prefer to invest in a good option and in a company which is a market leader. The project was taken up to know the main aspects of Bajaj Allianz Life Insurance Company which give it the highest business and customers, and its USP (Unique Selling Proposition).

Another option for investment available is the mutual fund. Mutual funds provide good returns, but the investments are directly exposed to risk; mutual fund companies also provide more liquidity. So while investing, people tend more towards mutual funds, as they provide more returns than insurance. In ULIPs, returns are also related to the stock market, but they carry an insurance benefit and the IRDA regulates the investment. After survey and analysis, I came to know that most people go for ULIP insurance policies to cover the risk of life and invest in a good portfolio, but a big portion of customers have taken the policies to save taxes. People are aware of the tax benefits they get for insurance policies; many people are getting the tax benefits in ULIP, whereas in a mutual fund they have to invest their money in tax-saving funds to get the tax benefit.

INTRODUCTION

The aim is to make a comparison of ULIP plans with mutual funds in Bajaj Allianz Life Insurance Co. Ltd, and to create awareness about Unit Linked Insurance Plan (ULIP) benefits. The above problem arises because every life insurance company has products with different positive and negative aspects. Bajaj Allianz Life Insurance has a number of products, which give the right way to save money and earn good profit on the invested premium. In the past, people took insurance policies as a protection tool only. In the present scenario, the insurance sector is providing more services along with basic life insurance, so the responsibilities of insurance companies have increased compared to the past. This type of study was taken up in the insurance sector looking at the vast scope and opportunity in this booming field of life insurance, the growing awareness among the public regarding insuring their life through life insurance policies, the growing contribution of insurance to the GDP of the country, and the number of private players making an entrance into this booming industry.

A mutual fund is a trust that pools the savings of a number of investors who share a common financial goal. The money thus collected is then invested in capital market instruments such as shares, debentures and other securities. The income earned through these investments and the capital appreciation realized are shared by its unit holders in proportion to the number of units owned by them. Thus a mutual fund is the most suitable investment for the common man, as it offers an opportunity to invest in a diversified, professionally managed basket of securities at a relatively low cost. The overall goal of this project was to create awareness about investments.

OBJECTIVES

• To understand the reasons for which customers prefer ULIP as one of the best insurance investment modes rather than mutual funds.
• To compare the investment options of customers in ULIPs and mutual funds.
• To find the significant difference between customers of different incomes with respect to investment mode.

LIMITATIONS

• Middle class people do not know the basic concept of ULIP, so creating awareness is a big challenge.
• As both ULIPs and mutual funds are related to the share market, many customers think that investment in the share market is very risky.
• Narrow-minded thinking of middle class people, for whom investment is not their cup of tea.
• The findings of my research are from a small sample size.
brought together over 240 private life insurers and provident societies under one nationalized monopoly corporation and Life Insurance Corporation (LIC) was born. the first general insurance company established in the year 1850 in Calcutta by the British. This was in conformity with the Government's chosen path of State lead planning and development. can trace its roots to the Triton (Tital) Insurance Company Limited. Several frauds during 20's and 30's sullied insurance business in India. Hesitations on the part of respondents to disclose financial information. The first comprehensive legislation was introduced with the Insurance Act of 1938 that provided strict State Control over insurance business. It was the first company to charge same premium for both Indian and non-Indian lives.The (non-life) insurance . The insurance business grew at a faster pace after independence. The Oriental Assurance Company was established in 1880. Till the end of nineteenth century insurance business was almost entirely in the hands of overseas companies.Insurance regulation formally began in India with the passing of the Life Insurance Companies Act of 1912 and the provident fund Act of 1912. The Government of India in 1956. 1938: Earlier legislation consolidated and amended to by the Insurance Act with the objective of protecting the interests of the insuring public.business continued to thrive with the private sector till 1972. . 1928 : :The Indian Insurance Companies Act enacted to enable the government to collect statistical information about both life and non-life insurance businesses. nearly 107 insurers were amalgamated and grouped into four companies. National Insurance Company Limited. 1972. and United India Insurance Company Limited. New India Assurance Company. Some of the important milestones in the life insurance business in India are: 1850: Non life insurance debuts with triton insurance company. 
1870: :Bombay mutual life assurance society is the first Indian owned life insurer 1912: The Indian Life Assurance Companies Act enacted as the first statute to regulate the life insurance business. New India Assurance Company Limited. The general insurance industry was nationalized in 1972.The general insurance business was nationalized after the promulgation of General Insurance Business (Nationalizations) Act. The post-nationalization general insurance business was undertaken by the General Insurance Corporation of India (GIC) and its 4 subsidiaries: Oriental Insurance Company Limited. Their operations were restricted to organized trade and industry in large cities. OrientalInsurance Company and United India Insurance Company.National Insurance Company. These were subsidiaries of the General Insurance Company (GIC). With this. on the other hand. 1972 : The General Insurance Business (Nationalization) Act. viz. the first general insurance company established in the year 1850 in Calcutta by the British. the New India Assurance Company Ltd. 1957 : General Insurance Council.. a wing of the Insurance Association of India. can trace its roots to the Triton Insurance Company Ltd.. 1956. and the United India Insurance Company Ltd. 1968 : The Insurance Act amended to regulate investments and set minimum solvency margins and the Tariff Advisory Committee set up. the National Insurance Company Ltd. LIC Act.. the first company to transact all classes of general insurance of India. the Oriental Insurance Company Ltd. set up. . with a capital contribution of Rs.1956: 245 Indian and foreign insurers and provident societies taken over by the central government and nationalized. frames a code of conduct for ensuring fair conduct and sound business practices. The General insurance business in India. Some of the important milestones in the general insurance business in India are: 1907: The Indian Mercantile Insurance Ltd. GIC incorporated as a company. 
LIC formed by an Act of Parliament. 1972 nationalized the general insurance business in India with effect from 1st January 1973. 5 Crore from the Government of India. 107 insurers amalgamated and grouped into four companies’ viz. was formed to evaluate the Indian insurance industry and recommend its future direction. UK . USA Aviva International Holdings Ltd.1993: Malhotra Committee. South Africa Sun Life. Om Kotak Life Insurance Co.. Ltd.ICICI potential and HDFC standard Life insurance are the first private insurers to sell a policy. Ltd. Birla Sun Life Insurance Co. The insurance market have witnessed dynamic changes which includes presence of a fairly large number of insurers both life and non-life segment. ING Vysya Life Insurance Co. Ltd. The Malhotra committee was set up with the objective of complementing the reforms initiated in the financial sector. Standard Life Assurance. AVIVA Sahara Life Insurance Co. Reliance Life Insurance Co. For years now. Most of the private insurance companies have formed joint venture partnering well recognized foreign players across the globe.V. Ltd. Ltd. LIFE INSURANCE COMPANIES Sl. Canada American International Assurance Co. Ltd. 2001: Royal Sundaram Alliance first non life insurer to sell a policy 2002 Banks allowed to sell insurance plans. Ltd.N. 2000: IRDA starts giving licenses to private insurers:Kotak Life Insurance . Ltd. No. USA BNP Paribas Assurance SA. Tata-AIG Life Insurance Co. Foreign Partners Standard Life Assurance.. the private players are active in the liberalized environment. UK Old Mutual. USA Prudential . Ltd. INSURANCE MARKET –PRESENT The insurance sector was opened up for private participation seven years ago. Allianz Bajaj Life Insurance Co. Metlife India Insurance Co. SBI Life Insurance Co. Netherlands Allianz. Ltd.. 1 2 3 4 5 6 7 8 9 10 11 12 13 Insurer HDFC Standard Life Insurance Co. UK New York Life. Germany Metlife International Holdings Ltd. Ltd. France ING Insurance International B. 
UK ICICI-Prudential Life Insurance Co.. Malhotra. 1997 : Insurance regulator IRDA set up.headed by former Finance Secretary and RBI Governor R. New premium collection for the company was Rs 4. Bajaj Allianz Life Insurance Co Ltd has reported a growth of 52% and its market share went up to 6. Netherlands HSBC.680 crore in FY2007-08. UK Religare. an increase of 87% over last year. Ltd. Ltd. France Pantaloon Retail Ltd. the company ranked 6th in 2007-08. has dropped from 74% a year before. Birla Sun Life Insurance Co Ltd market share of the company increased from 1. .11% in 2007-08.88% and it ranks 6 th among the insurance companies and 5th amongst the private players. South Africa AXA Holdings. It experienced growth of 58% in new business premium. Life Insurance Corporation of India Sanlam. Bharti AXA Life Insurance Co.93% in 2007-08 from 6.66 crore in 200708. HDFC Standard Life Insurance Co Ltd with an income of Rs 2. DLF Pramerica Life Insurance Co. registering a year-on-year growth of 64%. Ltd..792.22% to 2. It now ranks 5th in new business premium and 4th in number of new policies sold in 2007-08. mainly owing to entry of private players with innovative products and better sales force.97% in 2006-07.14 15 16 17 18 19 20 21 Shriram Life Insurance Co. with total market share of 7. USA TOP 10 LIFE INSURANCE COMPANIES IN INDIA LIC (Life Insurance Corporation of India) still remains the largest life insurance company accounting for 64% market share.76 crore and its market share went up to 2.23% a year back. however. Its market share is 2. Netherlands Prudential of America. Generali. (SMNPL).98% in 2007-08 form 5. Future Generali India Life Insurance Company Ltd IDBI Fortis Life Insurance Company Ltd. Ltd. SBI Life Insurance Co Ltd in terms of new number of policies sold.792. accounting for increase in market share to 8.66% in 2006-07. Aegon Religare Life Insurance Company Ltd. The company ranked second (after LIC) in number of policies sold in 2007-08. 
ICICI Prudential Life Insurance Co Ltd is the biggest private life insurance company in India.

Reliance Life Insurance Co Ltd improved its ranking during 2007-08, moving from the 11th position to the 9th.

In the fiscal 2007-08, Kotak Mahindra Old Mutual Life Insurance Ltd reported growth of 80% and captured a market share of 1.19%.

Aviva Life Insurance Company India Ltd's ranking dropped to 10th in 2007-08 from 9th the previous year, even as its total new business grew to Rs 641.83 crore as against Rs 387.51 crore. The company has a presence in more than 3,000 locations across India via 221 branches and close to 40 bancassurance partnerships, and plans to increase its capital base by Rs 344 crore.

Max New York Life Insurance Co Ltd reported growth of 73% in 2007-08. Canara HSBC OBC Life Insurance Company Ltd is among the newest entrants to the market.

MARKET SHARE OF VARIOUS LIFE INSURANCE COMPANIES IN INDIA

At the end of FY2008, LIC continued to dominate with a market share of 48.9%. Among the private insurers, ICICI Prudential and Bajaj Allianz led the pack, followed by SBI Life, HDFC Standard, Birla Sunlife, Reliance Life, Max New York, OM Kotak, Aviva, Tata AIG, MetLife and ING Vysya, while the recent entrants Shriram Life and Bharti Axa Life held small shares of 0.3% and 0.2% respectively.

BOOMING INSURANCE MARKET IN INDIA

With a huge population base and a large untapped market, the insurance industry is a big opportunity area in India for national as well as foreign investors. India is the fifth largest life insurance market among the emerging insurance economies globally and is growing at 32-34% annually. This impressive growth in the market has been driven by liberalization, with new players significantly enhancing product awareness and promoting consumer education and information. The strong growth potential of the country has also made international players look at the Indian insurance market; saturation of insurance markets in many developed economies has made the Indian market all the more attractive.

• Total life insurance premium in India is projected to grow to Rs 1,230,000 crore by 2010-11.
• Total non-life insurance premium is expected to increase at a CAGR of 25% for the period spanning from 2008-09 to 2010-11.
• Health insurance is poised to become the second largest business for non-life insurers, after motor insurance, in the next three years.
• The home insurance segment is set to achieve 100% growth, as financial institutions have made home insurance obligatory for housing loan approvals.
• With the entry of several low-cost airlines, along with fleet expansion by existing ones and increasing corporate aircraft ownership, the Indian aviation insurance market is all set to boom in a big way in the coming years.
• A booming life insurance market has propelled Indian life insurance agents into the 'top 10 country list' in terms of membership of the Million Dollar Round Table (MDRT), an exclusive club for the highest performing life insurance agents.

This research report will help the client to analyze the leading-edge opportunities critical to the success of the insurance industry in India. Based on this analysis, the report gives a future forecast of the market that is intended as a rough guide to the direction in which the market is likely to move.

COMPANY PROFILE

Bajaj Allianz Life Insurance is a union between Allianz SE, one of the largest insurance companies in the world, and Bajaj Finserv. Allianz SE is a leading insurance conglomerate globally and one of the largest asset managers in the world, managing assets worth over a trillion (over INR 55,00,000 crores). Allianz SE has over 115 years of financial experience and is present in over 70 countries around the world. At Bajaj Allianz Life Insurance, customer delight is the guiding principle; the business philosophy is to ensure excellent insurance and investment solutions by offering customized products, supported by the best technology.

VISION
• To be the first choice insurer for customers.
• To be the preferred employer for staff in the insurance industry.
• To be the number one insurer for creating shareholder value.

MISSION
As a responsible, customer focused market leader, we will strive to understand the insurance needs of the consumers and translate them into affordable products that deliver value for money.

Accelerated Growth

Fiscal Year           No. of policies sold    New Business in FY
2001-2002 (6 mths)    21,376                  Rs 7 cr
2002-2003             1,15,965                Rs 63 cr
2003-2004             1,86,443                Rs 180 cr
2004-2005             2,88,189                Rs 857 cr
2005-2006             7,81,685                Rs 2,717 cr
2006-2007             20,79,217               Rs 4,302 cr
2007-2008             37,44,742               Rs 6,674 cr

The company ranked second (after LIC) in the number of policies sold in 2007-08, with a total market share of 7.36%, the highest among private insurers.

RESULTS FOR CURRENT FY TILL 31ST DECEMBER 2008

The Gross Written Premium (GWP) for the nine months ended 31st Dec 2008 is Rs 6,726 crores as compared to Rs 5,219 crores in the corresponding period of the previous year, a growth of 29%. New business premium for the nine months ended 31st Dec 2008 is Rs 3,003 crores as compared to Rs 3,780 crores in the corresponding period of the previous year. The Company posted a profit of Rs 364 lacs for the period ended 31st Dec 2008 as compared to a profit of Rs 5,358 lacs in the corresponding period of the previous year. The policyholder surplus is Rs 15,514 lacs (corresponding period of the previous year: Rs 18,681 lacs) and the shareholders' loss stands at Rs 15,150 lacs (corresponding period of the previous year: Rs 13,323 lacs).

Despite a challenging environment, the company has been able to reduce not only commission but also operating expenses. Operating expenses came down to 20% of GWP for the nine months ended 31st Dec 2008, as compared to 26% for the corresponding period of the previous year. Commission on new business premium, which was 27% during the nine months ended 31st Dec 2007, came down to 20% during the current period.

The share capital (including share premium) is Rs 1,211 crores as on 31st December 2008; during the period, no additional capital has been infused. The solvency margin of the company continues to be very strong: solvency as on 31st Dec 2008 stands at 261% (required solvency is 150%). Policies in force as on 31st Dec 2008 are around 70 lacs, and the number of policies underwritten during the nine months was over 18 lakh (corresponding period of the previous year: over 23 lakh). The Company operates out of 1,138 offices as on 31st Dec 2008 and employed on roll 22,129 staff, as against 20,764 staff at 31st March 2008.

Bajaj Allianz General Insurance received the Insurance Regulatory and Development Authority (IRDA) certificate of Registration on 2nd May 2001 to conduct General Insurance business (including Health Insurance business) in India. The Company has an authorized and paid up capital of Rs 110 crores; Bajaj Finserv Limited holds 74% and the remaining 26% is held by Allianz SE. In 2007-08, Bajaj Allianz General Insurance maintained its premier position in the industry by achieving growth as well as profitability. The company garnered a premium income of Rs 2,866 crore, achieving a growth of 11% over the last year, and made a profit before tax of Rs 149.8 crore, becoming the only private insurer to cross the Rs 100 crore mark in profit before tax in the last two years. The profit after tax was Rs 95 crores.
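As a rough sanity check on the growth figures reported above, the implied compound annual growth rate of new business can be computed. The snippet below is purely illustrative; it uses the reported new business of about Rs 180 cr in 2003-04 and about Rs 6,674 cr in 2007-08.

```python
def cagr(initial: float, final: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (final / initial) ** (1 / years) - 1

# New business premium: ~Rs 180 cr in 2003-04 -> ~Rs 6,674 cr in 2007-08
growth = cagr(180, 6674, 4)
print(f"{growth:.1%}")  # roughly 147% a year over the four-year span
```

Growth of this order (well above 100% a year) is consistent with the policy-count column in the table, which rises from under two lakh to over 37 lakh policies over the same period.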
PRODUCTS PROFILE

Unit Linked Plans
• New Family Gain
• New Unit Gain Plus
• New Unit Gain Premier

Traditional Plans
• Invest Gain
• Cash Gain
• Child Gain

Retirement Solutions
• Swarna Vishranti
• New Unit Gain Easy Pension Plus

Health Plans
• Care First
• Health Care

Term Plans
• Risk Care
• Term Care

UNIT LINKED INSURANCE POLICY (ULIP)

A unit linked insurance policy is one in which the customer is provided with a life insurance cover and the premium paid is invested in either debt or equity products or a combination of the two. In other words, it enables the buyer to secure some protection for his family in the event of his untimely death and, at the same time, provides him an opportunity to earn a return on the premium paid. In the event of the insured person's untimely death, his nominees would normally receive an amount that is the higher of the sum assured (insurance cover) or the value of the units (investments). However, there are some schemes in which the policyholder receives the sum assured plus the value of the investments.

Every insurance company has four to five ULIPs with varying investment options. Just as in the case of mutual funds, buyers who are risk averse can buy into debt schemes, while those who have an appetite for risk can opt for balanced or equity schemes. Schemes have been tailored to suit different customer profiles and, in that sense, offer a great deal of choice. The advantage of a ULIP is that since the investments are made for long periods, the chances of earning a decent return are high. However, the charges paid in these schemes in terms of the entry load, administrative fees, underwriting fees, buying and selling charges and asset management charges are fairly high, and they vary from insurer to insurer in the quantum as also in the manner in which they are charged. Schemes also differ in their charges and conditions for withdrawals and surrender.

Key features
• Premiums paid can be single, regular or variable. The payment period too can be regular or variable.
• The risk cover (insurance cover) can be increased or decreased.
• Investments can be made in gilt funds (government securities), growth funds (equities), bonds (corporate bonds), money-market funds or balanced funds (part debt, part equity). The policyholder can switch between schemes (for instance, balanced to debt, or gilt to equity).
• As in all insurance policies, the risk charge (mortality rate) varies with age. However, for an individual the risk charge is always based on the age of the policyholder in the year of commencement of the policy. As a rule, high premiums attract lower charges and vice versa.
• The maturity benefit is not typically a fixed amount, and the maturity period can be advanced (early withdrawal) or extended. There could be policies that allow the policyholder to remain invested beyond the maturity period in the event of the maturity value not being satisfactory.
• The investment risk is transferred to the policyholder. The maturity benefit is the net asset value of the units, and the value would be high or low depending on the market conditions during the period of the policy and the performance of the fund manager. Thus there is no capital protection on maturity unless the scheme specifically provides for it.

Tax benefits
The premiums paid for ULIPs are eligible for tax benefit under Section 80C, which allows a maximum of Rs 1,00,000, and proceeds from ULIPs are tax-free under Section 10(10D), unlike those from a mutual fund, which attract short-term capital gains tax.

POINTS TO REMEMBER ABOUT ULIP

• First-year charges: Usually steep; charges can be as high as 70 per cent if the scheme affords a lot of flexibility. Some insurers charge higher fees in the initial years and lower them significantly in the subsequent years.
• Subsequent charges: Usually lower than first-year charges.
• Administration charges: These range between Rs 15 per month and Rs 60 per month, are levied by cancellation of units and depend on the nature of the scheme. They are normally deducted on a monthly basis from the unit value.
• Asset management fees: Fund management charges vary from 0.6 per cent to 0.75 per cent for a money market fund, and are around 1.5 per cent for an equity-oriented scheme. Fund management expenses and brokerage are built into the daily net asset value.
• Risk charges: The charges are broadly comparable across insurers. If there is an increase in the value of units due to market conditions, the sum at risk (sum assured less the value of investments) reduces, and so the risk charges are lower.
• Switching charges: Some insurers allow four free switches every year but link them to a minimum amount; others allow just one free switch each year and charge Rs 100 for every subsequent switch.
• Top-ups: Usually attract 1 per cent of the top-up amount, though some insurers don't charge anything. A top-up normally goes directly into your investment account (units) unless you specifically ask for an increase in the risk cover.
• Surrender value of units: Insurers levy certain charges if the policy is surrendered prematurely. This levy varies between insurers and could be around 75 per cent in the first year, 60 per cent in the second year, 40 per cent in the third year and nil after the fourth year.
• Fund performance: Check the performance of similar schemes (balanced with balanced, equity with equity) across insurance companies, and look at NAV performance over a period of at least two to three years. This can only give you some indication of the credibility of the fund manager, because past performance is no guarantee of future returns.

Comparing schemes based on costs is a fairly complex exercise. As a rule, the higher the initial years' expenses, the longer it takes for the policy to outperform its peers with low initial-year costs and slightly higher subsequent-year expenses. Since insurance is a product which entails a long-term commitment on the part of the insurer, especially in products where the emphasis is on long-term performance (10 years or more), it is important not to go only by the features or the cost advantages of schemes but by the parentage of the insurer as well.

Retire unhurt
Pension plans are essentially tailored to meet old age financial requirements, and there are certain advantages in joining a pension plan. First of all, contribution to pension funds up to Rs 10,000 is eligible for tax deduction under Section 80CCC; in other words, your pension contribution gets deducted from your taxable income. So if you are in the top tax bracket, liable to pay 30.6 per cent tax, your tax savings will be that much. On the other hand, pension policies impose huge penalties for early termination.
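The tax saving mentioned above is simple arithmetic: because the contribution is deducted from taxable income, the saving equals the contribution times the marginal tax rate. A minimal sketch, using the Rs 10,000 limit and the 30.6 per cent rate quoted in the text:

```python
# Tax saving from a pension contribution deducted from taxable income
# (Section 80CCC limit of Rs 10,000 and the 30.6% top marginal rate
# mentioned in the text; this is a simplified sketch, not tax advice).

def tax_saving(contribution: float, marginal_rate: float) -> float:
    """Reduction in tax payable when `contribution` is deductible."""
    return contribution * marginal_rate

print(round(tax_saving(10_000, 0.306), 2))  # 3060.0
```

So a top-bracket taxpayer contributing the full Rs 10,000 saves about Rs 3,060 in tax for the year.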
Usually, the minimum entry age is 18 years and the maximum age is 60 years, and you can choose to pay the premium for five to 30 years. All life insurance companies offer pension products, both conventional and unit-linked. In both cases you pay a certain premium amount for a specified length of time. In a conventional scheme, your money is managed through the insurer's pooled investment account and you are entitled to bonuses every year, while in a ULIP you receive the value of the investment in your individual account. In a ULIP you also have the flexibility to choose between a conservative scheme and an aggressive scheme with a high allocation to equities. When the policy matures, you receive one-third of the value of the accumulated amount as a lump-sum payment; for the remaining, you can buy annuities either from the existing insurer or from any other insurer.

HOW DOES ULIP WORK

Sara is a thirty-year-old who wants a product that will give him market-linked returns as well as a life cover. He wants to invest Rs 50,000 a year for 10 years in an equity-based scheme. Based on this premium, the sum assured works out to Rs 5,32,000. Based on the current NAV of the plan that Sara chooses to invest in, he is allotted units in the scheme; as and when charges are levied, units equivalent to the charges are deducted from his portfolio.

The charges in the first year include a 14 per cent sales charge, an administration charge (7 per cent for the first Rs 20,000 and 3 per cent for the remaining Rs 30,000) and underwriting charges. For the remaining nine years, a 3.5 per cent sales charge and an administration charge of 4 per cent (for the first Rs 20,000) and 2 per cent (for the remaining Rs 30,000) are levied, in addition to mortality charges, which are deducted monthly. A fund management fee of 1.5 per cent (equity) and brokerage are also charged; this cost is built into the calculation of the net asset value.

On maturity, that is, after 10 years, Sara would receive the sum assured of Rs 5,32,000 or the market value of the units, whichever is higher. Assuming the growth rate in the market value of the units to be 10 per cent per annum, Sara would receive Rs 7,24,400; assuming a growth rate of 6 per cent per annum, Sara would receive Rs 5,81,900. In case of Sara's untimely death at the end of the ninth year, his beneficiaries would likewise receive the higher of the sum assured and the market value of the units. Assuming a growth rate of 10 per cent per annum, the value of the units at the end of the ninth year would be Rs 6,21,900, so the beneficiaries would get Rs 6,21,900. Assuming a growth rate of 6 per cent per annum, the value of the investment would be Rs 5,10,032; since this is lower than the sum assured, his family would get Rs 5,32,000.

RISKS ASSOCIATED WITH ULIPS

ULIPs, as the name suggests, are directly linked with the investments made by the insured. Though he does not have a direct say in the management of the fund, he does exercise his choice in the form of the investment option selected. With stock markets soaring high a few months back, ULIPs were offering a good rate of return, but with a sudden downfall of the stocks, ULIPs are bound to become negative investments. The reality is that most ULIPs take more than 5 years to break even; policies where the costs are 65 per cent and upwards have not even recovered the principal, despite the strongest bull market we have ever witnessed.

At present, since there is no benchmark to measure one fund against the other, a policy-holder cannot understand the growth of his investments vis-a-vis other funds in the market. Usually a policy-holder could ask his investment in a ULIP to be, for example, 55 per cent in equity and 45 per cent in debt; these components can be mixed according to his risk-taking ability. However, as the ULIP could be a mix of debt, liquid and equity investments, a single NAV figure may not be a true representation of performance. An investor, therefore, would have to look at quarterly statements, where the fund would be compared with benchmarks.

ADVANTAGES OF ULIP
• Can easily rebalance your risk between equity and debt without any tax implications.
• Best suited for medium risk taking individuals who wish to invest in equity and debt funds (at least 40% or higher exposure to debt).
• No additional tax burden for those investing mainly in debt, unlike in mutual funds.
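The mechanics of the Sara illustration can be sketched in a few lines of code. The snippet below follows the charge schedule quoted above (14 per cent sales charge plus a two-slab administration charge in year one, 3.5 per cent and a lower administration charge thereafter), but ignores mortality charges, underwriting charges and the fund management fee, so its figures will differ somewhat from those quoted; it is a simplified sketch, not any insurer's actual calculation.

```python
# Simplified ULIP sketch for the illustration above. Mortality,
# underwriting and fund management charges are ignored for brevity.

def admin_charge(premium: float, first_slab_rate: float, rest_rate: float) -> float:
    """Two-slab administration charge: one rate on the first Rs 20,000,
    another on the remainder of the premium."""
    first = min(premium, 20_000)
    return first * first_slab_rate + (premium - first) * rest_rate

def fund_value(premium: float, years: int, growth: float) -> float:
    """Accumulated unit value after annual premiums, charges and growth."""
    fund = 0.0
    for year in range(1, years + 1):
        if year == 1:  # 14% sales charge; admin 7% / 3% slabs
            invested = premium - 0.14 * premium - admin_charge(premium, 0.07, 0.03)
        else:          # 3.5% sales charge; admin 4% / 2% slabs
            invested = premium - 0.035 * premium - admin_charge(premium, 0.04, 0.02)
        fund = (fund + invested) * (1 + growth)
    return fund

def payout(fund: float, sum_assured: float = 532_000) -> float:
    """Maturity or death benefit: higher of sum assured and unit value."""
    return max(fund, sum_assured)

print(round(fund_value(50_000, 10, 0.10)))  # accumulated value at 10% growth
print(round(payout(510_032)))               # 532000: the sum assured applies
```

Note how the floor works: when the unit value (Rs 5,10,032 in the 6 per cent death scenario) falls below the sum assured, the payout is the sum assured of Rs 5,32,000.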
MUTUAL FUNDS

Mutual funds are one of the best investments ever created because they are very cost efficient and very easy to invest in (you don't have to figure out which stocks or bonds to buy). By pooling money together in a mutual fund, investors can purchase stocks or bonds with much lower trading costs than if they tried to do it on their own. But the biggest advantage of mutual funds is diversification.

INTRODUCTION OF MUTUAL FUNDS

ACCORDING TO AMFI (ASSOCIATION OF MUTUAL FUNDS IN INDIA): A Mutual Fund is a trust that pools the savings of a number of investors who share a common financial goal. The money thus collected is then invested in capital market instruments such as shares, debentures and other securities. The income earned through these investments and the capital appreciation realized is shared by its unit holders in proportion to the number of units owned by them. Thus a Mutual Fund is the most suitable investment for the common man, as it offers an opportunity to invest in a diversified, professionally managed basket of securities at a relatively low cost.

CHARACTERISTICS OF A MUTUAL FUND:
• Investors own the mutual fund.
• Professional managers manage the affairs for a fee.
• The funds are invested in a portfolio of marketable securities, reflecting the investment objective.
• The value of the portfolio and the investors' holdings alters with changes in the market value of the investments.

ADVANTAGES OF MUTUAL FUNDS:
1. Professional Management: Your money is managed by skilled and experienced investment professionals.
2. Diversification: Mutual Funds invest in a number of companies across a broad cross-section of industries and sectors. This diversification reduces the risk, because seldom do all stocks decline at the same time and in the same proportion. You achieve this diversification through a Mutual Fund with far less money than you can on your own.
3. Convenient Administration: Investing in a Mutual Fund reduces paperwork and helps you avoid many problems such as bad deliveries, delayed payments and unnecessary follow-up with brokers and companies. Mutual Funds save your time and make investing easy and convenient.
4. Return Potential: Over a medium to long term, Mutual Funds have the potential to provide a higher return, as they invest in a diversified basket of selected securities.
5. Low Costs: Mutual Funds are a relatively less expensive way to invest compared to directly investing in the capital markets, because the benefits of scale in brokerage, custodial and other fees translate into lower costs for investors.
6. Liquidity: In open-ended schemes, you can get your money back promptly at Net Asset Value (NAV) related prices from the Mutual Fund itself. With close-ended schemes, you can sell your units on a stock exchange.
7. Transparency: You get regular information on the value of your investment, in addition to disclosure on the specific investments made by your scheme, the proportion invested in each class of assets and the fund manager's investment strategy and outlook.
8. Flexibility: Through features such as Systematic Investment Plans (SIP), Systematic Withdrawal Plans (SWP) and dividend reinvestment plans, you can systematically invest or withdraw funds according to your needs and convenience.
9. Choice of Schemes: Mutual Funds offer a variety of schemes to suit your varying needs over a lifetime.
10. Well Regulated: All Mutual Funds are registered with SEBI and they function within the provisions of strict regulations designed to protect the interests of investors. The operations of Mutual Funds are regularly monitored by SEBI.

DISADVANTAGES OF MUTUAL FUNDS:
• No Guarantees: No investment is risk free. If the entire stock market declines in value, the value of mutual fund units will go down as well, no matter how balanced the portfolio. Investors encounter fewer risks when they invest in mutual funds than when they buy and sell stocks on their own.
Of course, anyone who invests through a mutual fund still runs the risk of losing money.
• Fees and commissions: All funds charge administrative fees to cover their day-to-day expenses. Some funds also charge sales commissions or "loads" to compensate brokers, financial consultants, or financial planners. Even if you don't use a broker or other financial adviser, you will pay a sales commission if you buy units in a load fund.
• Taxes: During a typical year, most actively managed mutual funds sell anywhere from 20 to 70 percent of the securities in their portfolios. If your fund makes a profit on its sales, you will pay taxes on the income you receive, even if you reinvest the money you made.
• Management risk: When you invest in a mutual fund, you depend on the fund's manager to make the right decisions regarding the fund's portfolio. If the manager does not perform as well as you had hoped, you might not make as much money on your investment as you expected. Of course, if you invest in Index Funds, you forego management risk, because these funds do not employ managers.

In mutual funds too there is a risk-return trade-off, depending on the investment option chosen:

Option      Risk      Return
Equity      High      High
Balanced    Medium    Medium
Debt        Low       Low

TYPES OF MUTUAL FUNDS:

I. Closed-end or Open-end

Open-end Funds: An open-end fund is one that has units available for sale and repurchase at all times. An investor can buy or redeem units from the fund itself at a price based on the Net Asset Value (NAV) per unit.

Close-end Funds: A close-ended fund makes a one-time sale of a fixed number of units. It does not allow investors to buy or redeem units directly from the fund. However, to provide liquidity to investors, many closed-end funds get themselves listed on a stock exchange, and investors wanting to buy or sell units do so in the stock markets. Usually closed-end funds sell at a discount to NAV. Funds also offer "buy-back of units", thus providing another avenue of liquidity to the closed-end fund investor.

II. Load vs. No Load

Marketing of a new mutual fund scheme involves initial expenses, including advertising and marketing expenses and brokers'/agents'/distributors' commissions. These expenses may be recovered from the investors in different ways at different times. SEBI has defined a "load" as the one-time fee payable by the investor to allow the fund to meet initial issue expenses. Three usual ways in which a fund's sales expenses may be recovered from the investors are:
1. At the time of the investor's entry into the fund/scheme, by deducting a specific amount from his initial contribution: front-end or entry load.
2. At the time of the investor's exit from the fund/scheme, by deducting a specific amount from the redemption proceeds payable to the investor: back-end or exit load.
3. By charging the fund/scheme with a fixed amount each year, during a stated number of years: deferred load.
These charges made by the fund managers to the investors to cover distribution/sales/marketing expenses are often called "loads". Funds that charge front-end, back-end or deferred loads are called load funds; funds that make no such charges for sales expenses are called no-load funds. A load fund's declared NAV does not include load charges.

III. Tax-exempt vs. Non-tax-exempt Funds

Generally, when a fund invests in tax-exempt securities, it is called a tax-exempt fund. In India, after the 1999 Union Government Budget, all of the dividend income received from any of the mutual funds is tax-free in the hands of the investors. However, funds other than Equity Funds have to pay a distribution tax before distributing income to investors. In other words, equity mutual fund schemes are tax-exempt investment avenues, while other funds are taxable for distributable income.

DIFFERENT TYPES OF MUTUAL FUNDS

Once we have reviewed the fund classes, we are ready to discuss more specific fund types. Funds are generally distinguished from each other by their investment objectives and the types of securities they invest in.

A. Broad Fund Types by Nature of Investments
Mutual funds may invest in equities, bonds or other fixed income securities, or short-term money market securities. So we have Equity, Bond and Money Market Funds. All of them invest in financial assets. But there are also funds that invest in physical assets; thus, we may have Gold or other Precious Metal Funds, or Real Estate Funds.

B. Broad Fund Types by Investment Objective
Investors, and hence the mutual funds, pursue different objectives while investing. Growth Funds invest for medium to long-term capital appreciation. Income Funds invest to generate regular income, and less for capital appreciation. Value Funds invest in equities that are considered under-valued today, whose value will be unlocked in the future.

C. Broad Fund Types by Risk Profile
The nature of a fund's portfolio and its investment objective imply different levels of risk undertaken, so funds are often grouped in order of risk. Equity Funds have a greater risk of capital loss than a Debt Fund that seeks to protect the capital while looking for income. Money Market Funds are exposed to less risk than even the Bond Funds, since they invest in short-term fixed income securities, as compared to the longer-term portfolios of Bond Funds.

Money Market Funds: Lowest rung in the order of risk level, Money Market Funds invest in securities of a short-term nature, which generally means securities of less than one-year maturity.

Debt Funds (or Income Funds): Next in the order of risk level, we have the general category of Debt Funds. Debt funds invest in debt instruments issued not only by governments, but also by private companies, banks and financial institutions and other entities such as infrastructure companies/utilities, typically of over one-year maturity (under one-year instruments being money market securities).

Diversified Debt Funds: A debt fund that invests in all available types of debt securities, issued by entities across all industries and sectors, is a properly diversified debt fund.

Focused Debt Funds: Some debt funds have a narrow focus, with less diversification in their investments. Examples include sector, specialized and offshore debt funds, and MIPs. Other examples of focused funds include those that invest only in Corporate Debentures and Bonds, or only in Tax Free Infrastructure or Municipal Bonds. A diversified debt fund is less risky than a narrow-focus fund that invests in debt securities of a particular sector or industry.

High Yield Debt Funds: There are funds which seek to obtain higher interest rates by investing in debt instruments that are considered "below investment grade", e.g. Junk Bond Funds. These are very volatile and riskier.

Gilt Funds: Gilts are government securities with medium to long-term maturities.

Assured Return Funds (an Indian Variant): SEBI permits only those funds whose sponsors have adequate net worth to offer assurance of return.

Fixed Term Plan Series (Another Indian Variant): These are essentially closed-end schemes, meant for short-term investors who would otherwise place money as fixed-term bank deposits or inter-corporate bonds. Investors have some lock-in period, and these plans do not generally offer guaranteed returns.

EQUITY FUNDS

As investors move from the Debt Fund category to Equity Funds, they face an increased risk level. Equity funds offer no guaranteed returns, but a high potential for growth of capital.

Types of Equity Funds:

a) Aggressive Growth Funds
• Seek maximum capital appreciation.
• Invest in less researched or speculative shares.
• Very volatile and riskier.

b) Growth Funds
• Invest in companies whose earnings are expected to rise at an above-average rate, concentrating on future growth prospects and good potential.
• Seek capital appreciation in 3 to 5 years.
• Less volatile than aggressive growth funds, e.g. Technology Funds.

c) Specialty Funds
These invest in companies that meet predefined criteria.
i) Sector Funds, such as Technology Funds, Pharmaceutical Funds and FMCG Funds.
ii) Offshore Funds, which invest in equities in one or more foreign countries.
iii) Small-Cap Equity Funds, which invest in shares of companies with relatively lower market capitalization.

d) Diversified Equity Funds
A fund that seeks to invest only in equities, except for a very small portion in liquid money market securities, but is not focused on any one or few sectors or shares, may be termed a diversified equity fund. While exposed to all equity price risks, diversified equity funds seek to reduce the sector or stock specific risks through diversification.

e) Equity Index Funds
An index fund tracks the performance of a specific stock market index. The objective is to match the performance of the stock market by tracking an index that represents the overall market. The fund invests in the shares that constitute the index, and in the same proportion as the index.

f) Value Funds
Value Funds try to seek out fundamentally sound companies whose shares are currently under-priced in the market. Value Funds will add only those shares to their portfolios that are selling at low price-earnings ratios and low market-to-book value ratios, and are undervalued by other yardsticks.

g) Equity Income Funds
These are equity funds designed to give the investor a high level of current income along with some steady capital appreciation, investing mainly in shares of companies with high dividend yields.

h) Equity Linked Savings Schemes (an Indian Variant)
Investment in these schemes entitles the investor to claim an income tax rebate, but usually carries a lock-in period before the end of which funds cannot be withdrawn.

• Hybrid Funds (Quasi Equity/Quasi Debt): Many mutual funds mix different types of securities (money market, debt and equity) in their portfolios. Such funds are termed "hybrid funds", as they have a dual equity/bond focus.

• Commodity Funds: While all of the debt/equity/money market funds invest in financial assets, the mutual fund vehicle is also suited for investment in other assets, for example physical assets.

• Real Estate Funds: Specialized Real Estate Funds would invest in real estate directly, or may fund real estate developers, or lend to them, or buy shares of housing finance companies, or may even buy their securitised assets.

Following are the different products and services offered by mutual fund companies:
• Open ended schemes
• Close ended schemes
• Growth/Equity oriented schemes
• Income/Debt oriented schemes
• Balanced funds
• Money market or liquid funds
• Gilt funds
• Index funds
of units held by the investor will change with every reinvestment. The corpus of a closed-end fund remains unchanged. Money market fund Provide easy liquidity. regular income and preserve the income Tax saving scheme offer tax rebeats to the under specific provisions of the Indian income tax laws Investment made under some schemes are allowed as deduction U/S 88 of the income tax act . • The return to the investor who chooses a growth option is the rate at which his initial investment has grown over a period for which he has invested in the fund. rather than being distributed to investors. The value of units will be similar to that under the dividend option . • Investors with longer investment horizons and limited requirements for income choose this option. • Automatic Reinvestment Plans Reinvestment of amount of dividend made by fund in the same fund. while the no.Investors wanting to buy or sell the units have to do so in the stock markets. investor has to impose voluntary self discipline. It gives option to the investor if the current fund performance in not satisfactory. Dividends are paid in the form of warrants or are directly credited to the investor’s bank accounts. . It is Voluntary Accumulation Plan. This plan is suitable for retired person. • Systematic Investment Plans( SIP) – For regular investment SIP is investing a fixed sum periodically in a disciplined manner for long term. In VAP. as an and when dividends are declared. Many SIP gives insurance benefits. It gives benefit of Rupee Cost averaging. In SIP monthly minimum Rs. • Systematic Transfer Plan ( STP) – Transfer on a periodic basis a specified amount from one scheme to another within the same fund family. Investors can invest particular amount one time for fixed time of period. It allows the investor flexibility with respect to the amount and frequency of investment.100 are invested. Interest is calculating compoundly.. 
There are four types of plans, as follows:

Lump sum investment: A one-time investment; the investor invests a particular amount once for a fixed period of time.

Systematic Withdrawal Plan (SWP): for regular income. A lump sum amount is invested one time and then a fixed amount is withdrawn monthly, while the remaining amount continues to grow. This plan is suitable for retired persons because it gives regular income.

Dividend option: Investors receive dividends from the mutual fund as and when dividends are declared. Dividends are paid in the form of warrants or are directly credited to the investor's bank account. Investors can choose the frequency of dividend distribution that suits their requirements, which can vary from daily, weekly, monthly, quarterly and half-yearly to annual. In a normal dividend plan the periodicity of dividends is left to the fund manager, i.e. the timing of the dividend payout is decided by the fund manager. Investors choosing this option hold a fixed number of units in the fund and earn income on this investment. The NAV of the investor's holding will vary with changes in the value of the portfolio and with the proportion of the income earned by the fund that is actually distributed as dividend.

REGULATORS IN INDIA

- SEBI: The capital markets regulator also regulates mutual funds in India. SEBI requires all mutual funds to be registered with it, and issues guidelines for all mutual fund operations (investment, accounts, expenses, etc.).

- RBI as supervisor of Money Market Mutual Funds: RBI has supervisory responsibility over all entities that operate in the money markets, so in the past the Money Market Mutual Fund schemes of mutual funds had to abide by policies laid down by RBI. Recently it has been decided that Money Market Mutual Funds of registered mutual funds will be regulated by SEBI through the SEBI (Mutual Fund) Regulations, 1996.

- RBI as supervisor of bank-owned mutual funds: As banks in India come under the regulatory jurisdiction of RBI, bank-owned funds are under the supervision of both RBI and SEBI.
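The SWP mechanics described above (a one-time lump sum paying a fixed monthly income while the remainder keeps growing) can be sketched as follows. The amounts and the monthly growth rate are hypothetical assumptions, not figures from the text.

```python
# A minimal Systematic Withdrawal Plan sketch: the corpus grows at an
# assumed monthly rate and a fixed withdrawal is paid out each month.

def swp_balance(lump_sum, monthly_withdrawal, monthly_rate, months):
    balance = lump_sum
    for _ in range(months):
        balance *= 1 + monthly_rate    # remaining corpus keeps growing
        balance -= monthly_withdrawal  # fixed monthly income paid out
    return balance

# Rs. 5,00,000 invested, Rs. 4,000 drawn monthly, 0.7% assumed monthly growth:
print(round(swp_balance(500_000, 4_000, 0.007, 12)))
```

With these assumed numbers the withdrawal slightly exceeds the monthly growth, so the corpus declines slowly; a smaller withdrawal or higher return would leave it growing, which is the trade-off a retired investor tunes when choosing the withdrawal amount.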
COMPARISON OF ULIP VS MUTUAL FUND

Unit Linked Insurance Policies (ULIPs) as an investment avenue are closest to mutual funds in terms of their structure and functioning. As is the case with mutual funds, investors in ULIPs are allotted units by the insurance company and a net asset value (NAV) is declared for them on a daily basis. Similarly, ULIP investors have the option of investing across various schemes similar to the ones found in the mutual funds domain: diversified equity funds, balanced funds and debt funds, to name a few. Generally speaking, ULIPs can be termed mutual fund schemes with an insurance component. However, it should not be construed that, barring the insurance element, there is nothing differentiating mutual funds from ULIPs.

1. Mode of investment / investment amounts

Mutual fund investors have the option of either making lump sum investments or investing through the systematic investment plan (SIP) route, which entails commitments over longer time horizons. The minimum investment amounts are laid out by the fund house. ULIP investors likewise have the choice of investing a lump sum (single premium) or using the conventional route, i.e. making premium payments on an annual, half-yearly, quarterly or monthly basis. In ULIPs, determining the premium paid is often the starting point for the investment activity. This is in stark contrast to conventional insurance plans, where the sum assured is the starting point and the premiums to be paid are determined thereafter. ULIP investors also have the flexibility to alter the premium amounts during the policy's tenure. For example, an individual with access to surplus funds can enhance the contribution, thereby ensuring that the surplus funds are gainfully invested; conversely, an individual faced with a liquidity crunch has the option of paying a lower amount (the difference being adjusted in the accumulated value of his ULIP). The freedom to modify
premium payments at one's convenience clearly gives ULIP investors an edge over their mutual fund counterparts.

2. Expenses

In mutual fund investments, the expenses charged for various activities like fund management, sales and marketing, and administration, among others, are subject to pre-determined upper limits prescribed by the Securities and Exchange Board of India. For example, equity-oriented funds can charge their investors a maximum of 2.5% per annum on a recurring basis for all their expenses; any expense above the prescribed limit is borne by the fund house and not the investors. Funds also charge their investors entry and exit loads (in most cases, either one is applicable): entry loads are charged at the time of making an investment, while the exit load is charged at the time of sale.

Insurance companies, on the other hand, have a free hand in levying expenses on their ULIP products, with no upper limits prescribed by the regulator, i.e. the Insurance Regulatory and Development Authority. The only restraint is that insurers are required to notify the regulator of all the expenses that will be charged on their ULIP offerings. This explains the complex and at times 'unwieldy' expense structures on ULIP offerings. Expenses can have far-reaching consequences for investors, since higher expenses translate into lower amounts being invested and a smaller corpus being accumulated.

3. Portfolio disclosure

Mutual fund houses are statutorily required to declare their portfolios on a quarterly basis, although most fund houses do so monthly. Investors get the opportunity to see where their monies are being invested and how they have been managed by studying the portfolio. There is a lack of consensus on whether ULIPs are required to disclose their portfolios. Some insurance companies do declare their portfolios on a monthly/quarterly basis.
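The point made above about expenses compounding against the investor can be shown numerically. The 2.5% figure is SEBI's cap for equity funds quoted in the text; the 12% gross return and the 1.25% comparison ratio are assumptions made purely for illustration.

```python
# Higher recurring expenses mean a smaller accumulated corpus.

def corpus(initial, gross_annual_return, expense_ratio, years):
    net = gross_annual_return - expense_ratio  # growth net of expenses
    return initial * (1 + net) ** years

low_cost = corpus(100_000, 0.12, 0.0125, 20)   # assumed 1.25% expense ratio
high_cost = corpus(100_000, 0.12, 0.025, 20)   # SEBI's 2.5% cap for equity funds

print(round(low_cost), round(high_cost))
# The lower-cost fund ends up with a visibly larger corpus over 20 years.
```

The gap widens with the horizon, which is why an 'unwieldy' ULIP expense structure matters more for long-term goals like retirement than it would for a short holding period.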
While one school of thought believes that disclosing portfolios on a quarterly basis is mandatory, the other believes that there is no legal obligation to do so and that insurers are required to disclose their portfolios only on demand; during our interactions with leading insurers we came across divergent views on this issue. The lack of transparency in ULIP investments could, however, be a cause for concern, considering that the amount invested in insurance policies is essentially meant to provide for contingencies and for long-term needs like retirement. Regular portfolio disclosures, on the other hand, enable investors to make timely investment decisions.

4. Flexibility in altering the asset allocation

As stated earlier, offerings in the mutual funds segment and the ULIPs segment are largely comparable. For example, plans that invest their entire corpus in equities (diversified equity funds), plans with a 60:40 allotment in equity and debt instruments (balanced funds) and plans investing only in debt instruments (debt funds) can be found in both ULIPs and mutual funds. If a mutual fund investor in a diversified equity fund wishes to shift his corpus into a debt fund from the same fund house, he could have to bear an exit load and/or an entry load. The ULIP investor, by contrast, is effectively given the option to invest across asset classes as per his convenience in a cost-effective manner.

5. Tax benefits

ULIP investments qualify for deductions under Section 80C of the Income Tax Act, and this holds good irrespective of the nature of the plan chosen by the investor. In the mutual funds domain, by contrast, only investments in tax-saving funds (also referred to as equity-linked savings schemes) are eligible for Section 80C benefits.

The flexibility to switch between asset classes can prove very useful, for example in a bull market when the ULIP investor's equity component has appreciated:
he can book profits by simply transferring the requisite amount to a debt-oriented plan. Maturity proceeds from ULIPs are tax free.

On the other hand, in the mutual funds domain the tax treatment of gains depends on the type of fund and the holding period. In the case of equity-oriented funds (for example diversified equity funds and balanced funds), if the investments are held for a period over 12 months the gains are tax free; conversely, investments sold within a 12-month period attract short-term capital gains tax @ 10%. Debt-oriented funds attract a long-term capital gains tax @ 10%, while a short-term capital gain is taxed at the investor's marginal tax rate.

Despite the seemingly similar structures, evidently both mutual funds and ULIPs have their own unique set of advantages to offer. As always, it is vital for investors to be aware of the nuances of both offerings and to make informed decisions.

REVIEW OF LITERATURE

Mr. Madhu T. made a study on 'ULIPs hold edge over mutual funds' (The Economic Times, August 2006). The findings show that distributors would push unit linked insurance plans (ULIPs) to earn better commission, since ULIPs offer attractive front-end commissions to agents. Independent financial advisors believe that though there is a possibility of some distributors favouring ULIPs in the short term, the new directive would be beneficial for both the industry and investors in the long run.

Mr. Deepak Shenoy, in his article 'Comparing ULIP returns to Mutual Funds' (The Indian Investor's Blog, June 2009), reveals that over the last three years their growth mutual fund has given better returns than the "MAXIMISER" option of their ULIPs.

Murthaza and Sony, in their article 'An Overview on ULIP', present an initiative from Bajaj Allianz to create a better understanding of ULIPs and their benefits so that investors can obtain maximum returns from their investments.

Mr. Bernz Jayma P. made a study on 'Mutual Fund disadvantages'. He suggested: 'If you're new to stock market investing you may have heard that mutual funds would be a good way for you to get started. That's actually good advice, but mutual funds have their own pitfalls to watch out for.'
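The capital gains treatment described above can be encoded as a small lookup. The rates are those quoted in this document (the rules prevailing when it was written); current Indian tax law differs, so treat this purely as an illustration of the logic.

```python
# Capital gains tax on mutual fund redemptions, per the rules quoted above.
# Rates are historical/illustrative, not current law.

def capital_gains_tax(gain, fund_type, holding_months, marginal_rate):
    if fund_type == "equity":
        # long-term (>12 months) equity gains tax free; short-term taxed @ 10%
        return 0.0 if holding_months > 12 else gain * 0.10
    if fund_type == "debt":
        # long-term debt gains @ 10%; short-term at the investor's marginal rate
        return gain * 0.10 if holding_months > 12 else gain * marginal_rate
    raise ValueError("fund_type must be 'equity' or 'debt'")

print(capital_gains_tax(10_000, "equity", 18, 0.30))  # -> 0.0
print(capital_gains_tax(10_000, "debt", 6, 0.30))     # -> 3000.0
```

The asymmetry is visible immediately: a short-term debt gain taxed at a 30% marginal rate costs thirty times as much as a long-term equity gain, which is exactly why the ULIP's load-free internal switching is presented as an advantage.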
DATA INTERPRETATION AND ANALYSIS

(A) Gender

   Male      37   74.0%
   Female    13   26.0%
   Total     50  100.0%

INTERPRETATION: The above table shows that, out of 50 customers, 74% of the respondents are male policy holders and the remaining 26% are female policy holders.

(B) Marital Status

   Married     33   66.0%
   Unmarried   17   34.0%
   Total       50  100.0%

INTERPRETATION: From a sample of 50 customers, 66% of the policy holders are married and the remaining 34% are unmarried.

(C) Age

   20-30    6   12.0%
   30-40   14   28.0%
   40-50   17   34.0%
   50-60   11   22.0%
   60-70    2    4.0%
   Total   50  100.0%

INTERPRETATION: The table shows that the majority of the sample respondents were in the age group of 40-50 yrs (34%); 12% were in the age group of 20-30 yrs, 28% were 30-40 yrs, 22% were 50-60 yrs, and 4% were 60-70 yrs.

(D) Occupation

   Government        18   36.0%
   Private service   14   28.0%
   Business          11   22.0%
   NRIs               3    6.0%
   Others             4    8.0%
   Total             50  100.0%

INTERPRETATION: The table shows that the majority of the policy holders work in the Government sector (36%); 28% are engaged in private service, 22% are in the business field, 6% are NRIs, and 8% are engaged in other work.

(E) Annual Income

   Below 2 lakhs   19   38.0%
   2-4 lakhs       23   46.0%
   4-6 lakhs        6   12.0%
   6-8 lakhs        2    4.0%
   Total           50  100.0%
INTERPRETATION: The table shows that 46% of the policy holders earn a salary of 2-4 lakhs, 38% earn below 2 lakhs, 12% earn 4-6 lakhs, and 4% earn 6-8 lakhs.

1. Sources that help you in making the investment decisions.

   Financial journal    5   10.0%
   Television           2    4.0%
   Brokers/Agent       27   54.0%
   Friends             13   26.0%
   Consultants          3    6.0%
   Total               50  100.0%

INTERPRETATION: From the sample of 50 customers, 54% strongly agree that agents or brokers help them make investment decisions, 26% point out that their friends take part in the investment decision, 10% reveal that financial journals help them, 4% select television as their source, and the remaining 6% rely on consultants.

2. Factors that influence your investment decisions in a particular company.

   Attractive schemes     2    4.0%
   Tax benefits          27   54.0%
   High reputation        3    6.0%
   Rate of return        14   28.0%
   Variety of products    4    8.0%
   Total                 50  100.0%
56% of the customers invest money in bank deposit.0 8.0 100.0 6. According to you who among the following life insurance companies is best. then 4% in both insurance and mutual fund. Cumulative Frequency Valid Bajaj Allianz HDFC Standard life Tata AIG Aviva Life SBI Life Total 27 5 4 3 11 50 Percent 54. According to you who among the following life insurance company is best.12% in mutual fund.INTERPRETATION : From a sample of 50 customers. 4.0 6.and remaining 2% in stock market.0 Valid Percent 54. 26% in insurance sector.0 22.0 64.0 10.0 100.0 72.0 22.0 Percent 54.0 78.0 100.0 10. 8% for Tata AIG and remaining 6% stands for Aviva life insurance company. Cumulative Frequency Valid Excellent Good Fair Poor Total 2 37 9 2 50 Percent 4. 5.0 100.0 74.0 100.0 Percent 4.0 78.0 18.INTERPRETATION : From a sample of 50 customers.0 18.0 100.0 96.0 4.54% customers select Bajaj Allianz is the best insurance company. How would you rate our products. and 22% customers choose SBI Life.10% select HDFC.0 Valid Percent 4. How would you rate our products.0 4.0 74.0 . Cumulative Frequency Valid Strongly agree Agree Neutral Disagree Strongly disagree Total 2 33 8 5 2 50 Percent 4.0 86.74% customers thinks that the products offered by Bajaj Allianz Life insurance co. I would like to invest money in ULIP. is good.0 100.0 100.0 96.0 16.0 16.0 70. 6.0 10. and remaining 4% not satisfied with our products.4% thinks its excellent.0 10.0 .0 Valid Percent 4.0 4. I would like to invest money in ULIP.INTERPRETATION : From a sample of 50 customers.0 4.0 66.0 Percent 4.0 100.0 66.18% of them select Bajaj Allianz products are fair. 0 .0 Percent 28.0 92.0 4. 4% of them strongly supporting that fact. Reason for choosing ULIPs because of insurance coverage. and 16% has no opinion about it.INTERPRETATION : From a sample of 50 customers. And 4% strongly disagreed.0 64. 7. Cumulative Frequency Valid Strongly agree Agree Neutral Disagree Total 14 32 2 2 50 Percent 28.0 64.0 4. 
Reason for choosing ULIPs because of insurance coverage. 66% agree.0 100.0 100.0 100. remaining 10% also disagree with investment in ULIP.0 4.0 4.0 Valid Percent 28.0 96. 8.28% of them strongly support it.0 100.0 96.4% customers didn’t say anything.0 60.0 Valid Percent 6. I would like to invest money in Mutual Funds. 64% of the customers agree.0 28. Cumulative Frequency Valid Strongly agree Agree Neutral Dsagree Strongly disagree Total 3 13 14 18 2 50 Percent 6.0 36.0 28.INTERPRETATION : From a sample of 50 customers.0 .0 32.0 100.0 4.0 36. I would like to invest money in mutual funds. .0 26.0 4.0 Percent 6. So we can see that most of the Customers choose ULIP because of insurance coverage.0 100.0 26. and remaining 4% disagree with that fact. 0 4. Mutual funds are more risky than ULIP products.0 88.0 . 9. Mutual funds are more risky than ULIP products.26% of the customers agree with that fact.And remaining 10% disagreed.0 54.0 54.out of this 10%.INTERPRETATION : From a sample of 50 customers.0 Valid Percent 34. 4% strongly disagreed with it.0 96.0 100.6% of the customers strongly support it.0 Percent 34.0 4. Cumulative Frequency Valid Strongly agree Agree Neutral disagree Total 17 27 4 2 50 Percent 34.and 28% customers have no idea about it.0 100.0 100.0 8.0 8. INTERPRETATION : From a sample of 50 customers.54% of the customers thinks that mutual funds are more risky than ULIP products. Cumulative Frequency Valid Strongly agree Agree Neutral Disagree Total 12 31 5 2 50 Percent 24.0 100.8% customers have no opinion about it.0 96.and remaining 4% disagree with it.0 10.0 86.0 .0 100. ulip has advantage over mutual funds.0 62.0 Valid Percent 24. ULIPs have advantage over Mutual funds.0 4.34% strongly agree with this statement.0 4. 10.0 Percent 24.0 10.0 62.0 100. Do you think the safety factor is important in your investment in ULIP.INTERPRETATION : 62% of the customers agree with ULIP have advantage over mutual fund statement.0 6. 
And 4% of the customers do not support the statement, while the remaining 10% have no opinion about it.

11. Do you think the safety factor is important in your investment in ULIP?

   Strongly agree       4    8.0%
   Agree               26   52.0%
   Neutral              2    4.0%
   Disagree            15   30.0%
   Strongly disagree    3    6.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 52% agree, 8% strongly agree, 30% disagree, 6% strongly disagree, and the remaining 4% have no opinion on whether the safety factor is important in their investment in ULIP.

12. Do you think the liquidity factor is important in your investment in ULIP?

   Strongly agree       3    6.0%
   Agree                5   10.0%
   Neutral              5   10.0%
   Disagree            30   60.0%
   Strongly disagree    7   14.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (60%) disagree, 14% strongly disagree, 10% agree, 6% strongly agree, and the remaining 10% neither agree nor disagree with the statement.

13. Do you think the rate of return factor is important in your investment in ULIP?

   Strongly agree       6   12.0%
   Agree               21   42.0%
   Neutral              3    6.0%
   Disagree            12   24.0%
   Strongly disagree    8   16.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (42%) agree, 12% strongly agree, 24% disagree, 16% strongly disagree, and the remaining 6% neither agree nor disagree with the statement.

14.
Do you think the tax savings influence your investment decision in ULIP?

   Strongly agree       6   12.0%
   Agree               21   42.0%
   Neutral              5   10.0%
   Disagree            16   32.0%
   Strongly disagree    2    4.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (42%) agree, 12% strongly agree, 32% disagree, 4% strongly disagree, and the remaining 10% neither agree nor disagree with the statement.

15. Past scheme's performance influences your investment decision in ULIP.

   Strongly agree       8   16.0%
   Agree                8   16.0%
   Neutral              7   14.0%
   Disagree            23   46.0%
   Strongly disagree    4    8.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (46%) disagree, 8% strongly disagree, 16% agree, 16% strongly agree, and the remaining 14% neither agree nor disagree with the statement.

16. Advertisement influences the investment decision in ULIP.

   Strongly agree       9   18.0%
   Agree               11   22.0%
   Neutral             19   38.0%
   Disagree             5   10.0%
   Strongly disagree    6   12.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 18% strongly agree, 22% agree, 10% disagree, 12% strongly disagree, and the remaining 38% neither agree nor disagree with the statement.

17. Do you think the safety factor is important in your investment in mutual fund?

   Strongly agree       2    4.0%
   Agree                4    8.0%
   Neutral              8   16.0%
   Disagree            30   60.0%
   Strongly disagree    6   12.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 60% disagree, 12% strongly disagree, 8% agree, 4% strongly agree,
and the remaining 16% have no opinion on whether the safety factor is important in the investment in mutual fund.

18. Do you think the liquidity factor is important in your investment in mutual fund?

   Strongly agree       7   14.0%
   Agree               19   38.0%
   Neutral             15   30.0%
   Disagree             6   12.0%
   Strongly disagree    3    6.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (38%) agree, 14% strongly agree, 12% disagree, 6% strongly disagree, and the remaining 30% neither agree nor disagree with the statement.

19. Do you think the rate of return factor is important in your investment in mutual fund?

   Strongly agree       2    4.0%
   Agree                7   14.0%
   Neutral             21   42.0%
   Disagree            15   30.0%
   Strongly disagree    5   10.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 4% strongly agree, 14% agree, 30% disagree, 10% strongly disagree, and the remaining 42% neither agree nor disagree with the statement.

20. Do you think the tax savings influence your investment decision in mutual fund?

   Strongly agree       3    6.0%
   Agree                6   12.0%
   Neutral             23   46.0%
   Disagree            12   24.0%
   Strongly disagree    6   12.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 6% strongly agree, 12% agree, 24% disagree, 12% strongly disagree, and the remaining 46% neither agree nor disagree with the statement.

21. Past scheme's performance influences your investment decision in mutual fund.

   Strongly agree       6   12.0%
   Agree               22   44.0%
   Neutral             15   30.0%
   Disagree             7   14.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, the majority (44%) agree, 12% strongly agree, 14% disagree,
and the remaining 30% neither agree nor disagree with the statement.

22. Advertisement influences the investment decision in mutual fund.

   Strongly agree       4    8.0%
   Agree               16   32.0%
   Neutral             24   48.0%
   Disagree             4    8.0%
   Strongly disagree    2    4.0%
   Total               50  100.0%

INTERPRETATION: From a sample of 50 customers, 8% strongly agree, 32% agree, 8% disagree, 4% strongly disagree, and the remaining 48% neither agree nor disagree with the statement.

23. I would like to reinvest my funds in the same company again.

   Strongly agree      23   46.0%
   Agree               15   30.0%
   Neutral              6   12.0%
   Disagree             4    8.0%
   Strongly disagree    2    4.0%
   Total               50  100.0%

INTERPRETATION: 46% of the customers express their satisfaction with Bajaj Allianz's service and strongly agree with the statement, 30% agree, 12% are neutral, and the remaining 12% are not satisfied with Bajaj Allianz.

HYPOTHESIS 1

H0: There is no relationship between investment in ULIP and insurance coverage.
H1: There is a relationship between investment in ULIP and insurance coverage.

CORRELATIONS
Correlation between "I would like to invest money in ULIP" and "Reason for choosing ULIPs because of insurance coverage":

   Pearson Correlation    .729**
   Sig. (2-tailed)        .000
   N                      50

**. Correlation is significant at the 0.01 level (2-tailed).

INTERPRETATION: The table shows that the significance value for the relationship between investment in ULIP and insurance coverage is 0.000, which shows that there is a relationship between investment in ULIP and insurance coverage. We choose the alternate hypothesis because the significance value is less than 0.05. Hence it is very clear that most of the customers choose the ULIP product because it provides insurance coverage over their investment; we can conclude that most customers prefer ULIP products to mutual funds because of the insurance coverage.

HYPOTHESIS 2

H0: There is no relationship between the investment pattern and the annual income of the customers.
H1: There is a relationship between the investment pattern and the annual income of the customers.

T-TEST

Group statistics were computed for the annual income groups "Below 2 lakhs" (N = 19) and "6-8 lakhs" (N = 2) on the statements "I would like to invest money in ULIP" and "I would like to invest money in mutual funds". The independent-samples t-test gives a significance value (2-tailed) of 0.247 for ULIP and 0.061 for mutual funds.

INTERPRETATION: The significance values of the relationship between investment pattern and annual income are 0.247 for ULIP and 0.061 for mutual funds. We retain the null hypothesis because the significance values are greater than 0.05. Hence it is very clear that the income level does not take part in the investment decision; it may change the premium of the policy, but not the decision. This shows that there is no relationship between the investment pattern and the annual income level of the customers.

FINDINGS AND SUGGESTIONS

After the survey there are some findings and suggestions, as follows:

- As the insurance sector is growing rapidly, most of the life insurance players are selling ULIP plans.
- While investing with any insurance company, customers prefer a well-branded company; Bajaj is one of India's most famous and richest business families.
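The two hypothesis tests reported above can be reproduced in a few lines. The Likert responses below are made-up stand-ins (the survey's raw per-respondent data is not available here); only the test procedures mirror the analysis.

```python
# Pearson correlation (Hypothesis 1) and an independent-samples t-test
# (Hypothesis 2), as in the SPSS output above, on hypothetical data.

from scipy import stats

# Hypothetical 5-point responses to the two related ULIP questions:
ulip_interest = [4, 4, 5, 3, 4, 2, 4, 5, 3, 4]
insurance_reason = [4, 5, 5, 3, 4, 2, 4, 4, 3, 4]

r, p = stats.pearsonr(ulip_interest, insurance_reason)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # reject H0 when p < 0.05

# Hypothetical responses split by annual income group:
below_2_lakhs = [4, 3, 4, 5, 4, 3, 4]
six_to_8_lakhs = [3, 4, 4, 3, 4]

t, p = stats.ttest_ind(below_2_lakhs, six_to_8_lakhs, equal_var=False)
print(f"t = {t:.3f}, p = {p:.3f}")  # retain H0 when p > 0.05
```

`equal_var=False` gives the "equal variances not assumed" (Welch) row of the SPSS table; the report's own figures (r = .729 with p = .000, and t-test p-values of .247 and .061) lead to accepting H1 for Hypothesis 1 and retaining H0 for Hypothesis 2.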
- Since the last 4-5 years the returns provided by ULIPs have been very good, so people tend more towards ULIPs. Awareness about ULIPs is also growing; most people know about the ULIPs of life insurance companies.
- Middle class people are interested in investment but are not aware of such options, so more awareness should be created, as the main target customers are middle class people.
- 54% of respondents gave the best rating to the Bajaj Allianz Life Insurance ULIP, so we can say that Bajaj Allianz Life Insurance is doing well and has good potential in the market. Second preference is given to SBI Life, as many people perceive SBI Life to be a government-owned company and want security for their investment.
- The first reason an investor is interested in ULIP is the investment purpose, the second is the returns, and after that the tax benefit.
- While investing in mutual funds, 44% of the customers look at the returns and 42% observe the scheme's performance in past years. 6% of customers are attracted by the high reputation of the company.
- 12% of respondents had invested in mutual funds, 26% in ULIPs and 4% in both options. So awareness about mutual funds is not very good and can be improved. If BALIC wants to grow its market share, it should improve the awareness level of the common people and its marketing strategy.
- During the survey I found that many customers had already invested in ULIPs and mutual funds, and some people had invested in both options.
- Till date, people in India have not wanted to invest in the share market because they thought it was a bad thing; but awareness about mutual funds is increasing as more and more private players enter the market.
- In future, people will give more preference to the security of their money; they want a secured option which should also provide good returns. ULIPs are an option in which you can have both security and good returns.

CONCLUSION AND/OR RECOMMENDATIONS

From the above analysis and survey we can conclude as follows:

- Awareness of ULIPs is increasing as a greater number of private players enter the life insurance industry.
- ULIPs differentiate themselves from mutual funds in respect of the insurance cover. Investors in Bajaj Allianz Life ULIPs get the advantage of life insurance cover.
- Mutual funds are also becoming more and more popular in the Indian market, as many private companies innovate new funds according to investor demand.
- Bajaj Allianz should innovate more and more products and improve its distribution channels as per the area of sales. There is a need for insurers to undertake a demand audit in order to understand what the policyholder wants and needs.
- Deriving the right feedback from customers and bringing out innovative products which cater to customer demands will go a long way in tapping the market potential of the insurance and mutual fund sectors.
- Mutual funds are showing good growth, but many customers from rural areas don't have any knowledge about mutual funds; they think they are very risky.
- People are turning towards ULIPs as a good investment option, but as the ULIP is in its starting phase, customers prefer only big brands.
- Even investors from cities like Changanacherry don't have much knowledge about fund selection; they all depend on brokers. People in Changanacherry invest only in well-branded companies, as they don't trust other financial companies for taking a ULIP.
- Bajaj Allianz Life Insurance should create more awareness about its ULIPs, as even now people invest simply because Bajaj has been India's most known and favourite brand in the past.

BIBLIOGRAPHY

REFERENCE:
1) Research Methodology, C.R. Kothari, 2nd edition
2) Outlook Money, 15 May 2005, "ULIP Mania"
3) The Business Line, 10 June 2007, "Know all About ULIPS"

WEBSITE

QUESTIONNAIRE

I am RAJEEV JOSEPH, a student of Karunya School of Management, Coimbatore, doing a project on "A COMPARATIVE STUDY OF ULIP PLANS OF BAJAJ ALLIANZ LIFE INSURANCE WITH MUTUAL FUNDS". This questionnaire is a part of the project; the information collected through it will be used only for academic purposes and kept strictly confidential.

PERSONAL INFORMATION

1. Name:
2. Gender: (a) Male (b) Female
3. Marital status: (a) Married (b) Unmarried
4. Age: (a) 20-30 (b) 30-40 (c) 40-50 (d) 50-60 (e) 60-70
5. Occupation: (a) Government Service (b) Private service (c) Business (d) NRIs (e) Others
6. Annual Income: (a) Below 2 lakhs (b) 2-4 lakhs (c) 4-6 lakhs (d) 6-8 lakhs (e) Above 8 lakhs

1. Sources that help you in making the investment decisions.
   (a) Financial journal (b) Television (c) Brokers or agents (d) Friends (e) Consultants

2. Factors that influence your investment decisions in a particular company.
   (a) Attractive schemes (b) Tax benefits (c) High reputation (d) Rate of return (e) Variety of products

3.
3. You generally like to invest money in: (a) Insurance (b) Stock market (c) Mutual funds (d) Bank deposits (e) Both insurance and mutual funds
4. According to you, which of the following life insurance companies is best? (a) BAJAJ ALLIANZ (b) HDFC STANDARD LIFE (c) TATA AIG (d) AVIVA LIFE INSURANCE (e) SBI LIFE
5. How would you rate our products? (a) Excellent (b) Good (c) Fair (d) Poor (e) Very poor
6. I would like to invest money in ULIPs. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
7. My reason for choosing ULIPs is the insurance coverage. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
8. I would like to invest money in mutual funds. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
9. Mutual funds are more risky than ULIP products. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
10. ULIPs have an advantage over mutual funds. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
I would like to reinvest my funds in the same company again. (a) Strongly agree (b) Agree (c) Neutral (d) Disagree (e) Strongly disagree
Do you view the following factors/sources of information as important while investing in ULIPs? (Rate each from Strongly agree to Strongly disagree)
(11) Safety (12) Liquidity (13) Rate of return (14) Tax savings (15) Past scheme's performance (16) Rating of ULIP by agencies (17) Advertisements
18. Do you view the following factors/sources of information as important while investing in mutual funds? (Rate each from Strongly agree to Strongly disagree)
Hi Horst,

Thanks for your reply. The studio library says that the following restrictions apply on a test tenant: "Any data created for the original solution in the SAP cloud solution and any scoping decisions and fine tuning settings are not copied to the patch solution. For testing purposes, you must therefore scope the patch solution and create new test data."

We understand that with the first patch that is deployed, we have to download the data from the old solution and upload it to the new solution, since the namespace is changed. Do we need to carry out this activity for each and every patch we deploy, or is this required only for the first patch?

Best Regards,
Gauri

Hello Gauri,

No, this activity is only required for the very first patch. With further patches the old patch is simply re-opened for development with a higher version number, together with the existing data.

Bye,
Horst

Hello Gauri,

We encountered the same problem, which could be solved by uploading the patch solution into the original solution in the test tenant. This means:

Test tenant: Patch solution version: 2; Original solution version: 1 -> 2
Prod tenant: Original solution version: 1 -> 2

Give me feedback if this info was helpful.

Kind Regards,
Johannes
Deploying GWT Applications in SpringSource dm Server - Part 3

Introduction

This is the final blog in a series of three describing a step-by-step approach to building and deploying GWT applications in the SpringSource dm Server™. The first blog looked at the process of creating a simple WAR file from a sample GWT application and the second blog turned the GWT dependencies into an OSGi bundle which can be shared across multiple applications. This final blog will further modularize our GWT sample into OSGi services using Spring Dynamic Modules. This should clearly demonstrate the benefits of OSGi modularity: the ability to remove and replace services at runtime, seamless management of multiple versions of bundles, and how straightforward deployment and management are with dm Server.

This is the only blog in the series to actually use the Spring Framework. Spring is used to configure the Spring Dynamic Modules and publish and consume the OSGi services. It also demonstrates one mechanism of bridging the world of Spring-managed beans with GWT remoting. However, I'm well aware that Spring/GWT integration is a significant topic in itself, so I'm purposefully keeping to the one simple solution here.

Please see Part 1 for the background to the GWT StockWatcher sample and the software I'm using. Also note that you can skip all these tedious instructions and zoom on down to the downloads summary at the bottom.

Quick Catch Up

In Part 1, we built the GWT StockWatcher sample application from scratch as an Eclipse project and then generated the code into a Dynamic Web project which was then deployed into dm Server. Finally, we exported the Dynamic Web project into a WAR file and deployed it outside of STS. In Part 2, we removed the GWT dependencies from the WAR file and turned them into an OSGi bundle which was installed into dm Server's repository.
Having done that, we were then able to deploy any number of applications which use GWT remoting without having to include any GWT dependencies in the WARs.

In this final part, we're going to further modularize the application using Spring Dynamic Modules. A question you might well be asking is… why? It's not a bad question - no-one wants to unnecessarily complicate their code for the sake of it. In deciding whether a shared services approach is going to be helpful, you can boil it down to some simple questions:

- Is there any part of my application which may need to be used by another application?
- Will different components in my application evolve at different rates?
- Is it possible that I may need to maintain multiple concurrent versions of the same component?
- Do I want to be able to deploy changes to components in my application while it's running?

Looking at our StockWatcher application, it's currently limited to just one stock market. We know that there are many different markets around the world, so to make it more flexible, it would certainly make sense to give it the capability to access different markets, and that's what we're going to do - turn our stock markets into shared services. One bundle will contain a shared API which defines what a market can do. Other bundles can then be implementations of that API - maybe one for London and one for New York. We'll then have the capability to deploy the market we want, start it, stop it, undeploy it and replace it with another - all at runtime.

I'm conscious that the process of transforming a single WAR file into shared services will likely be a far more common experience than designing it in a modular way from the ground up, so this blog will further build on what we did in Part 2, rather than start over. You can download the Eclipse projects from Part 2 here and the finished projects from all the work in Part 3 here (you'll need to define a GWT_ROOT_INSTALL classpath variable if you do).
Hopefully, some of the suggestions I made in earlier blogs about dividing up the source code will start to finally make sense!

Step 1: Create a StockService API bundle

In order to create the London and New York StockWatcherService implementation bundles, we'll need a common API for them both. Obviously modularizing common code is good practice, but there is a more important reason why we need a separate bundle for the APIs. If you want to undeploy bundles in a running application in dm Server, you can safely remove code which implements a service, but you can't remove any interfaces which the remaining code has a dependency on. Therefore, it makes sense to put our service interface and common code into an API bundle which will be a dependency of the other bundles.

To create a simple bundle project in Eclipse, right-click and select New->Other->SpringSource dm Server->Bundle Project. This will create a skeleton MANIFEST.MF for you to get started with. There's no need for us to make this a Spring Dynamic Module as its sole purpose is to provide APIs and it won't have any bean instances of its own, so a vanilla bundle is fine.

- Specify StockWatcherServiceAPI as the project name
- If you prefer, you can configure the source directory by clicking the Configure Default… link in Project Layout. For consistency, I'd set it to src/main/java
- Set the bundle name and symbolic name to com.google.gwt.sample.stockwatcher.client.api and leave the version as 1.0.0
- The Module Type should be None and the Target Runtime should be your dm Server instance. If you don't have the option to select this, you'll need to click New and create one.
- You can now click Finish and you should see your new bundle project with a generated MANIFEST.MF.

Next thing to do is to move the API code into the new project. Drag and drop the entire com.google.gwt.sample.stockwatcher.client.api package from the StockWatcherWar project into the new bundle project.
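Since the screenshots have not survived in this copy of the post, here is a rough sketch of the skeleton MANIFEST.MF that the wizard generates for the values above. This is an assumption about the wizard output; STS may emit additional headers.

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: com.google.gwt.sample.stockwatcher.client.api
Bundle-SymbolicName: com.google.gwt.sample.stockwatcher.client.api
Bundle-Version: 1.0.0
```

After the export and import edits described next, an Export-Package line for com.google.gwt.sample.stockwatcher.client.api and an Import-Bundle line for com.google.gwt join these headers.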
This is all the code which is shared by both the client and the server. It should contain the StockPriceService interface and the StockPrice and DelistedException classes. If this was successful, just about everything should be broken!

Next, let's think about what we need to export from the new bundle. Well, in this case it's simple - the APIs from one package. In the Runtime tab of the MANIFEST.MF editor, add the package to the Exported Packages list and save. Now we need to think about which dependencies our API code requires, in other words, why it isn't building anymore. Clearly, the GWT bundle we created in Part 2 would be a good start. In the Dependencies tab, add the com.google.gwt bundle to the Import Bundle list (if it's not listed, it's likely not in the dm Server repository, which is explained in Part 2) and save. This should fix all of the build problems in the new project, but the other projects should still be broken. For now, let's ignore this as there's still more refactoring to do. At this point, the StockWatcherServiceAPI project should look like this: And its MANIFEST.MF should look like this:

Step 2: Create a StockPriceService service bundle

Having created an API bundle containing common code from which we can create multiple service implementations, let's create one of those implementations. We'll start with London. Create a new Bundle Project called "StockWatcherServiceLondon" using the instructions in Step 1 and give it a bundle name of com.google.gwt.sample.stockwatcher.service.london.

So let's think about what this bundle is going to need. It needs code to implement a London stock price service and a way of exporting that service to the OSGi registry as a shared service. The first part is a simple refactoring job as we already have some service code we can reuse.
For the second part, making the bundle into a Spring Dynamic Module is going to be the easiest approach as it makes publishing and consuming OSGi services using Spring beans very simple.

First things first. Go back and look at the service implementation in StockPriceServiceImpl.java. You'll see that the StockPriceService code is tied into the RemoteServiceServlet. The servlet needs to remain for the remoting to work, but we need to extract the implementation code and use it to create the shared service. Later on in Step 4, the servlet will be adapted to consume the service we've created.

So, copy the com.google.gwt.sample.stockwatcher.server package from the StockWatcherWar project and paste it into the src/main/java of StockWatcherServiceLondon. You should now have a copy of StockPriceServiceImpl.java that can be refactored. Delete extends RemoteServiceServlet, the RemoteServiceServlet import and the serialVersionUID field as none of these are now required. When you save the changes, you'll see that we need to import some dependencies for the bundle - specifically, the API bundle created in Step 1. Edit MANIFEST.MF to add an Import Package for com.google.gwt.sample.stockwatcher.client.api. When you save, you should see an error in the manifest saying that the package can't be resolved. What have we done wrong?!

The problem is that when you have imports and exports between bundle projects in Eclipse, you need to tell the tooling to allow the projects to share references. Right-click on StockWatcherServiceLondon and select Properties->Project References. Tick the box for StockWatcherServiceAPI and then OK. You'll then need to fake edit the MANIFEST.MF to pick up the change. You should now see StockWatcherServiceAPI listed as a Bundle Dependency. Let me make it clear that once a bundle has been exported from Eclipse, you don't need to perform this extra step - it's only needed when you want to create dependencies between bundles while they are still just projects.
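The post's code screenshots are gone, so here is a sketch of what the refactored StockPriceServiceImpl can look like after the servlet baggage is removed: a plain class that invents random prices, in the spirit of the original GWT StockWatcher sample. The StockPrice constructor shape and the price constants are assumptions for illustration; the real StockPrice class lives in the API bundle.

```java
import java.util.Random;

// Stand-in for the StockPrice class from the API bundle (assumed shape).
class StockPrice {
    private final String symbol;
    private final double price;
    private final double change;

    StockPrice(String symbol, double price, double change) {
        this.symbol = symbol;
        this.price = price;
        this.change = change;
    }

    String getSymbol() { return symbol; }
    double getPrice()  { return price; }
    double getChange() { return change; }
}

// The refactored service: no longer extends RemoteServiceServlet, no
// serialVersionUID; just a POJO that a Spring bean definition can instantiate.
class StockPriceServiceImpl {
    private static final double MAX_PRICE = 100.0;        // illustrative ceiling
    private static final double MAX_PRICE_CHANGE = 0.02;  // +/- 2% per refresh

    private final Random rnd = new Random();

    StockPrice[] getPrices(String[] symbols) {
        StockPrice[] prices = new StockPrice[symbols.length];
        for (int i = 0; i < symbols.length; i++) {
            double price = rnd.nextDouble() * MAX_PRICE;
            double change = price * MAX_PRICE_CHANGE * (rnd.nextDouble() * 2.0 - 1.0);
            prices[i] = new StockPrice(symbols[i], price, change);
        }
        return prices;
    }
}
```

Because the class is now free of servlet types, it can be instantiated directly by the Spring configuration we are about to write.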
You should still have one more problem to solve. Although the com..client.api bundle imports the com.google.gwt bundle, the import is private to that bundle. Any bundle which imports the com..client.api bundle doesn't inherit this dependency - it must be made explicitly. So in the MANIFEST.MF, add an Import-Bundle for the com.google.gwt bundle and save. If you still see a problem in StockPriceServiceImpl.java, you may need to try a fake edit on it, otherwise you've done something wrong. Brilliant! We're now halfway to creating a shared service. This is how it should currently look:

The next step is to turn the London StockPriceService into an OSGi service. We're going to do this using Spring Dynamic Modules. The principle of a Spring Dynamic Module is simple - you provide Spring configuration files in a /META-INF/spring folder and an ApplicationContext is created for you when the bundle is deployed. It's an excellent and very simple way of exporting Spring-managed beans as OSGi services.

So, create a /spring folder in /META-INF (right-click on /META-INF and select New->Folder). Then right-click on the new /spring folder and select New->Spring Bean Definition. You can call this whatever you like, but in my example it's called serviceLondon-config.xml. Enter the name and click Finish. We now need to define the StockPriceServiceImpl as a Spring bean, so that an instance of it is created when the bundle is deployed. If you're familiar with Spring, this should be easy. If you're not, you need to insert the following XML:

<bean id="stockPrices" class="com.google.gwt.sample.stockwatcher.server.StockPriceServiceImpl"/>

Finally, we need to export this bean as an OSGi service. We could do this in the same configuration file, but it's good practice to keep OSGi dependencies separate. So create another Spring Bean Definition called osgi-config.xml and this time, click Next and select the osgi namespace checkbox before clicking Finish.
To export the bean as an OSGi service, insert the following XML:

<osgi:service interface="com.google.gwt.sample.stockwatcher.client.api.StockPriceService" ref="stockPrices"/>

That's it! We now have a bundle which creates a shared service. Before we move on, it would be a wise idea to test it, to make sure it's behaving as we expect.

Step 3: Testing a shared service bundle

I'm going to describe two approaches to testing here: the quick and dirty method and a properly documented automated JUnit approach. Arguably this should be a blog posting all by itself, but I decided that it was an important enough subject to include here in detail.

Quick and dirty test: The quick and dirty method is to hack up another Spring Dynamic Module which will consume the service, inject it into a test class and then just log the output. It's a good way of making sure the service works, but obviously it's not asserting anything and can't run within a test framework. We've already built one Spring Dynamic Module, so for this new TestConsumer module, I'm only going to highlight the differences. The module will need to import the com.google.gwt.sample.stockwatcher.client.api bundle (or package) and it will need some Java code to call the service client. You'll need two Spring config files in the /META-INF/spring folder: one for the test bean (e.g. testConsumer-config.xml):

<bean id="consumer" class="com.ben.consumer.Consumer">
  <constructor-arg ref="priceService"/>
</bean>

and one for the <osgi:reference> to consume the service (e.g. osgi-config.xml):

<osgi:reference id="priceService" interface="com.google.gwt.sample.stockwatcher.client.api.StockPriceService"/>

In the Java code, simply call service.getPrices(new String[] {"foo", "bar"}) with some arbitrary strings and you should get back an array of StockPrice objects. If you want to log the output to System.err, this appears in dm Server's trace directory: <dm server install directory>/serviceability/trace/<name of your test bundle>/trace.log.
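Fleshed out, that consumer can be as small as the following sketch. The com.ben.consumer.Consumer name matches the bean definition above; the API types are stubbed locally here (assumed shapes) so that the sketch stands alone.

```java
// Stand-ins for the API bundle types (assumed shapes; the real ones live in
// com.google.gwt.sample.stockwatcher.client.api).
class StockPrice {
    private final String symbol;
    private final double price;

    StockPrice(String symbol, double price) {
        this.symbol = symbol;
        this.price = price;
    }

    public String toString() { return symbol + "=" + price; }
}

interface StockPriceService {
    StockPrice[] getPrices(String[] symbols);
}

// The quick-and-dirty consumer bean: Spring injects the OSGi service through
// the constructor, and the result is logged straight to System.err, which
// ends up in the bundle's trace.log under serviceability/trace.
class Consumer {
    Consumer(StockPriceService service) {
        for (StockPrice p : service.getPrices(new String[] {"foo", "bar"})) {
            System.err.println(p);
        }
    }
}
```

The point of the exercise is only to prove that the service reference resolves and calls succeed; there are no assertions, which is exactly why the JUnit approach below is worth the extra effort.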
To run your quick and dirty test in STS, you need to start dm Server. Then, drag and drop the entire StockWatcherServiceAPI project onto the server. When the project is deployed successfully, you should get a console message like this: Deployment of 'com.google.gwt.sample.stockwatcher.client.api' version '1' completed. Next, drag and drop the entire StockWatcherServiceLondon project onto the server and wait for the deployment message. If that all initializes successfully, drop your test project onto the server. If you want to get the server to start clean each time (wipe the old trace output etc.) then add a -Dcom.springsource.server.clean=true VM argument to the launch configuration, which you can find by double-clicking on the server instance, selecting Open Launch Configuration and clicking on the Arguments tab. If this isn't working and you're not sure why, you can download my entire Eclipse workspace here.

Automated JUnit OSGi bundle test: You can read about the support for automated integration testing of Spring Dynamic Modules in the Reference Guide here. The AbstractConfigurableBundleCreator test class will start an OSGi framework, load all the required test bundles and then build your test code as an on-the-fly bundle and test it. If you think that this sounds complicated, you'd be right. Not wanting you to feel short-changed, dear reader, I tried this for myself and I have to be honest, setting it up to get all of the required dependencies working was a bit of a pain. So here is my step-by-step pain-free guide to getting this up and running.

Out of the box, the test expects to get all of its dependencies from a local Maven repository. You can configure the test to use other types of dependency management, but for the sake of simplicity (heh!) I'm using Maven. Firstly, I installed m2eclipse by adding its update site (Help->Software Updates->Available Sites->Add Site).
It’s advisable to unselect the optional Maven POM Editor, otherwise you’ll get a bunch of unsatisfied dependency warnings and it won’t install. Create a new Java Project called StockWatcherServiceTest (with src/main/java as the source folder). Right-click on the project and select Maven->Enable Dependency Management to Mavenize the project. Next, create a new package (com.ben in my case) and copy the SimpleOSGiTest code from the Spring DM Reference Guide into a test class in that package (note that the test framework will not allow you to use the default package). This test class will help us to sanity-check that the dependencies are all set up correctly. You’ll notice that the test won’t yet compile because we don’t yet have the dependencies it needs. You need to specify the correct dependencies in the pom.xml file. To save you hours of fist-shaking and blaspheme, you can view my pom.xml file here (included in the full workspace download here). Once you have the dependencies working, see if you can spot the typo in the sample code, add the java imports and you should now have a test class which compiles. Try running the test with right-click->Run As->JUnit Test. If you get a ClassNotFoundException, it’s because the test is looking in the wrong place for the .class files. The easiest way to fix this is to override getRootPath() and return “file:./target/classes” or whatever path your .class files are being generated into. Hopefully you should now have a green bar, which has at least proved that the test case started up the Equinox framework and was able to create an on-the-fly bundle successfully. Now we can create our proper unit test for the StockWatcherServiceLondon bundle. Next step is to get the test dependency bundles loading and resolving correctly. To do this, the required bundles need to be installed in the local maven repository and specified in getTestBundlesNames() in the testcase. 
Let’s deal with Maven first: The pom.xml file created earlier should have downloaded the external dependencies, but we will need to add the com..client.api, com..service.london and com.google.gwt bundles to the maven repository manually. I did this for the bundles in the workspace by mavenizing the projects and creating a pom.xml file for each. There are a couple of gotchas here. Firstly, to get Maven to pick up the MANIFEST.MF files, you have to configure maven-jar-plugin in the pom.xml and point it at the location of the file. See my pom.xml here. Secondly, I discovered that the Spring dm Server proprietary Import-Bundle manifest entry doesn’t work, so I had to change both projects to use a number of Import-Package entries instead. Once you have made these changes, open a command-prompt, change to the project directory and type mvn install. This will build the plugin and install it into the local maven repository. To install the gwt plugin we build in Part 2 I used the following command-line: mvn install:install-file -DgroupId=com.google.gwt -DartifactId=com.google.gwt -Dversion=1.5.3 -Dpackaging=maven-plugin -Dfile=<location of jar file>. It’s worth checking at this point that the plugins built correctly by unzipping the jars in the repository and checking that the manifests look sane. Once all of the test bundles are built and installed into the repository, they need to be specified in the testcase in a getTestBundlesNames() method. The syntax is described in the reference documentation and I specified 4 bundles: the com..client.api, com..service.london, com.google.gwt and javax.servlet. Hopefully at this point, you’ll be able to run the test without any actual assertions, but you should be successfully getting it to install and start all of the test bundles without any problems. If it’s not working, a useful debug tool is to enable the Equinox telnet console and use that to check that the expected imports and exports all appear to be correct. 
If you’re getting ClassNotFoundException, they’re probably not. The way to enable the console is overriding createPlatform() and calling System.setProperty(“osgi.console”, “9000”). Once all of the bundles are resolving and starting correctly, you’re finally ready to write some assertions! We need to consume the service that’s exported by the com..service.london bundle. Copy the <osgi:reference line from the quick and dirty example above into an osgi-config.xml file and then add this to a getConfigLocations() method in the testcase, which returns a String array of config files. In this particular case, we’re just going to make it a vanilla OSGi bundle rather than a Spring Dynamic Module, so no need to put the file in a /spring folder. In an integration test, Spring will autowire all spring-managed beans into a testcase where there’s an appropriate setter, so adding a setStockPriceService(StockPriceService s) method will inject the OSGi service into the test class. Once you have a reference to the service, you can start to assert things on it… finally! Now, I realise that this is a very verbose description of what is ultimately a simple concept. Frankly, the best way to understand the configuration is to look at my Eclipse workspace here. I’ve described it in detail because it’s one thing to see an example of something that works and it’s quite another to understand the processes and pitfalls of how to get there. However, once set up, this is an extremely powerful way of integration testing bundle services in an automated manner. Step 4: Consuming the shared service In Step 2, we created an API bundle and a shared service bundle. We now need to refactor our other projects to work with these bundles, both at compile-time and runtime. At the end of Step 2, you’ll have broken some dependencies in the StockWatcherWar and StockWatcher projects. Let’s fix these first. Starting with StockWatcher, it’s broken because we removed the APIs from the WAR project it depended on. 
Change its build-time dependency so that it references the StockWatcherServiceAPI project (right-click->Build Path->Configure Build Path->Projects). There are no runtime dependencies for this project, so this is all that needs to be done.

StockWatcherWar is now going to need some modification to consume the OSGi service we created in Step 2. Currently, it still contains the service code that we copied over to the service bundle in Step 2, so one job to do is to remove this code and replace it with a call which delegates to the OSGi service instead. However, this means that we'll need to somehow pass a reference to the service bundle into the class… which also means that we actually need to get an instance of it from somewhere. Let's deconstruct this further.

We already know how to get a reference to the service because we did it in the test methods above: create a Spring configuration file and add an <osgi:reference> tag. So let's start by doing this. Create a new Spring Bean definition in /META-INF (right-click->New->Spring Bean Definition), call it whatever you like (e.g. osgi-config.xml) and make sure to select the osgi namespace checkbox to get the schema included. Add the <osgi:reference> example code from Step 3 and you now have a Spring configuration file to consume the OSGi service. Should this configuration file go into a /spring subdirectory to make it a Spring Dynamic Module? The correct answer is no. We need to use this to bootstrap a specific flavour of ApplicationContext defined in web.xml, so we don't need another one being created automatically for us.
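Filled out with the namespace declarations that the wizard generates, the consuming osgi-config.xml ends up looking something like this sketch (the schema locations follow Spring Dynamic Modules conventions and are an assumption about the exact wizard output):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
         http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <!-- Consume the StockPriceService published by whichever market bundle
         is currently deployed (London, New York, ...) -->
    <osgi:reference id="priceService"
        interface="com.google.gwt.sample.stockwatcher.client.api.StockPriceService"/>

</beans>
```

Note that the reference is made against the interface alone, which is what makes the hot-swap in Step 6 possible: any bundle exporting a service under that interface will satisfy it.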
While we’re on the subject, edit web.xml and add the following lines: <context-param> <param-name>contextClass</param-name> <param-value>com.springsource.server.web.dm.ServerOsgiBundleXmlWebApplicationContext</param-value> </context-param> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/META-INF/*.xml</param-value> </context-param></pre> This will bootstrap a dm Server ApplicationContext and use all of the xml files in /META-INF to configure it. Next, we need to work out how to pass this Spring-managed service instance to the StockPriceServiceImpl class, in other words, a way of bridging the Spring and servlet worlds. One way to do this is to look up the ApplicationContext in the ServletContext and call getBean() on it. This isn’t great because it requires us to hard-code a bean name into the service code and rely on dependency lookup. A cleaner mechanism is to use a Spring-managed ServletFilter, inject the service instance into it and then pass that through to the servlet using a static constant for the lookup. This is what I have implemented here: This class should be created in the StockWatcherWar project along with the StockPriceServiceImpl class. Also, don’t forget to delegate to the FilterChain in doFilter(). If you’ve been following the instructions carefully, neither this class or StockPriceServiceImpl will compile at this point, due to the refactoring in Step 1. To fix this, you need to add the StockWatcherServiceAPI as a project reference to StockWatcherWar (right-click->Properties->Project References) and then add com.google.gwt.sample.stockwatcher.client.api to the Import-Bundle list. With this change, the code should now compile. Now that we have a bridge between the servlet and the ApplicationContext, all we need to do is to create an instance of the ServletFilter and inject the service into it. Create another spring bean definition in /META-INF, call it whatever you like (eg. stockwatcher-config.xml). 
Copy the following configuration into it (assuming that the bean name for the OSGi service you used was priceService):

<bean id="myFilter" class="com.google.gwt.sample.stockwatcher.server.StockPriceServiceFilter">
  <constructor-arg ref="priceService"/>
</bean>

Finally, we need a little magic trick called a DelegatingFilterProxy. This class delegates all incoming and outgoing servlet requests to a Spring-managed ServletFilter and is defined in web.xml. Copy this code into your web.xml (it assumes that the bean name for your ServletFilter is myFilter):

<filter>
  <filter-name>myFilter</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
  <init-param>
    <param-name>targetFilterLifecycle</param-name>
    <param-value>true</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>myFilter</filter-name>
  <servlet-name>StockService</servlet-name>
</filter-mapping>

The pieces of the jigsaw are finally coming together. You'll see that in the code above we also mapped the DelegatingFilterProxy to the StockService servlet. The web.xml configuration is now complete. All that remains now is to get hold of the service instance in the StockPriceServiceImpl code and use it!

Given that we seem to have thought of everything, try deploying the WAR project to dm Server, following the same instructions used for the quick and dirty test in Step 3: drop the API project onto the server, then the London service project and finally the StockWatcherWar project. You should see the following:

[2008-12-19 17:46:23.504] onnection(4)-192.168.1.5 <SPDE0010I> Deployment of 'com.google.gwt.sample.stockwatcher.client.api' version '1' completed.
[2008-12-19 17:46:44.125] onnection(4)-192.168.1.5 <SPDE0010I> Deployment of 'StockWatcherServiceLondon' version '1' completed.
[2008-12-19 17:47:03.528] onnection(4)-192.168.1.5 <SPSC1000I> Creating web application '/StockWatcherWar'.
[2008-12-19 17:47:03.544] async-delivery-thread-1 <SPSC1001I> Starting web application '/StockWatcherWar'.
[2008-12-19 17:47:03.676] async-delivery-thread-1 <SPSC1005E> Failed to start web application '/StockWatcherWar': consult the Server trace.log for further details.
[2008-12-19 17:47:03.680] async-delivery-thread-1 <SPSC1002I> Removing web application '/StockWatcherWar'.
[2008-12-19 17:47:03.752] onnection(4)-192.168.1.5 <SPDE0011E> Deployment of 'com.google.sample.stockwatcher' version '1' failed.

Deployment failed? After all that? Gadzooks! What's wrong here? Have a look at the trace file in <dm Server installation>/serviceability/trace/com.google.sample.stockwatcher-1/trace.log. You should see a ClassNotFoundException: org.springframework.web.context.ContextLoaderListener. The problem is that, now that we're in the world of explicit imports and exports, we need to explicitly import the bundles that give us the Spring support in the WAR file, so add the following bundles to the Import-Bundle entry: org.springframework.context, org.springframework.core, org.springframework.beans and org.springframework.web. It should now look like this:

Now try to deploy it again. This time, I promise, it should work. Don't move on to Step 5 until it does.

Step 5: Celebrate

Raid the mini-bar, make a cup of tea, do a little dance or punch the air, whatever is appropriate for your culture and budget.

Step 6: Hot-swap different services

Now it's time to show off one of dm Server's party pieces. First, let's create another stock market service we can swap in. This is not much more than a copy/paste job. Right-click on the StockWatcherServiceLondon project, select Copy and then Paste. Name the copied project StockWatcherServiceNewYork. The things to edit are as follows:

- MANIFEST.MF - update the bundle name and symbolic name from London to NewYork.
- serviceLondon-config.xml - rename, but no need to change any of the contents
- StockPriceServiceImpl.java - change the prices considerably so that it's obvious we're using a different stock market. I just added 500 to each price. Go Dow Jones!

Now for the fun part. Start up the StockWatcherWar with the London stock service as before and add a few stocks so that you can see it working. Then right-click on the StockWatcherServiceLondon entry in the Servers view and select Remove. This will undeploy the StockWatcherServiceLondon bundle from the server. You'll now see the application pause. The call to the remote service will block until dm Server has polled for a replacement service. So let's give it one. Take the StockWatcherServiceNewYork project and drop it onto the server. Wait a couple of seconds and… hey presto… the application is now using the New York service (note the significant increase in prices).

Step 7: Deploy outside of STS

Deploying the application outside of STS is a simple matter of exporting the bundles. The technique differs depending on whether or not you are using Maven and whether Maven is automatically doing the job for you. Using Export->Jar File on a Maven project will fail unless you only export the /src directory. You'll also need to select "Use existing manifest from workspace" regardless of whether you are using Maven or not. Once the bundles are exported, there are a couple of ways of packaging them and then a couple of ways to deploy them. In terms of packaging, you can leave them as individual bundles, or you can combine them all into a single PAR file. A PAR file looks like a bundle which contains bundles and is described here. The significant benefit of using a PAR file is that it runs its bundles in a "scope", so it is completely isolated from other bundles running in other applications. However, if you deploy the application as a PAR, you can't then hot-swap individual bundles within it.
Once you’ve decided on the deployment mode, you can either upload the file(s) using the admin console or you can copy them into the <dm Server installation>/pickup directory. The first time you do this, the server needs to be already running and you should copy the bundles into the directory in the correct order. dm Server remembers the order for the next time you start the server. The advantage of using this mechanism is that you can hot-deploy or undeploy individual bundles simply by moving them in and out of this directory. Download Summary I realise I’ve been scattering the downloads throughout the blog, so I thought I’d summarise them here. Also, apologies for zipping up small files, but the blogging tool won’t allow me to upload raw XML or Java files for security reasons. Have fun!
http://spring.io/blog/2008/12/19/deploying-gwt-applications-in-springsource-dm-server-part-3/
CodePlex Project Hosting for Open Source Software

Hello, I am trying to get the samples working with WPF using (Farseersimplesamples_v5_56959\Demo1). I am using the class library version of Farseer. My application cannot find the namespace FarseerPhysics.Mathematics; is there a class library version of this, or how can I get round this? Maybe I am asking the wrong question. Maybe Farseer is not for WPF. I found this old thread which had some code examples posted which now has dead links: Has anyone had any small success with WPF, that they may be willing to post a short example of code with WPF and Farseer? thanks.

Farseer Physics Engine 3.0 should work on WPF. At least, it does work here. Could you send me your project? Upload it somewhere or contact me using CodePlex and I will reply with my email.

Thank u very much - if I just got started I'm sure I'd be fine. I've just tried starting again with something even more simple - creating a body. It doesn't seem to work again. I'm using FarseerPhysics3.dll and trying some stuff, so I've just pasted in this code to a new project:

int mass = 1;
float width = 128;
float height = 128;
Body rectBody = new Body();
rectBody.Mass = mass;
rectBody.MomentOfInertia = mass * (width * width + height * height) / 12;
rectBody.Position = new Vector2(100, 200);

The error in this case is that FarseerPhysics.Dynamics.Body has no constructors defined. Here's the project file: .... ah, just in case, here's the other project I attempted badly:

OK, I've found out that I shouldn't use the dll files, but include the directories. Also I should have installed XNA to understand those examples first, before moving on to WPF. I've got a moving rectangle!! (but gravity does not seem to work) Using the DispatcherTimer instead of the storyboard they use in the Silverlight examples - the storyboard didn't work for some reason. I am having trouble with gravity. My rectangle moves down now, but does not seem to accelerate. The world has been created with gravity though.
I'm not sure if I need a GravityController, though I have tried that. Here's my code so far:

/// <summary>
/// Interaction logic for MainWindow.xaml
/// </summary>
public partial class MainWindow : Window
{
    World _world = new World(new Vector2(0, -20));
    Body _rectBody;
    DebugView _debugView;
    DispatcherTimer dispatcherTimer = new DispatcherTimer();

    public MainWindow()
    {
        InitializeComponent();
        // Set up the timer for game updates
        dispatcherTimer.Tick += new EventHandler(dispatcherTimer_Tick);
        dispatcherTimer.Interval = new TimeSpan(0, 0, 0, 0, 100);
        dispatcherTimer.Start();
        // create a body and place it in position defined by Vector2
        _rectBody = BodyFactory.CreateBody(_world, new Vector2(5, 50));
        _rectBody.Mass = 100;
        _rectBody.LinearDamping = 0;
        _rectBody.BodyType = BodyType.Dynamic;
        label1.Content = _world.Gravity.Y;
        _rectBody.IgnoreGravity = false;
    }

    void dispatcherTimer_Tick(object sender, EventArgs e)
    {
        _world.Step(0.1f);
        label2.Content = this._rectBody.Position.Y;
        translateTransform.Y = 50 - _rectBody.Position.Y;
    }
}

My goal firstly is still to get a simple example working with WPF (before starting my proper project). When I get this done, I will post the code, so hopefully it's easier for the next person who tries this. Any help with gravity would be much appreciated.

Well, after some days trying to get WPF working, I stumble across it: Farseer Physics 2 HAS WPF SAMPLES! I was trying to use version 3, and didn't know that version 2 was there with samples! I had thought that nothing was made for WPF! Now I know why no one wanted to help, or why Genbox didn't understand what the problem was. thanks :) it was very helpful. Why did the WPF examples not make it to version 3?

If you have ported the WPF samples to FPE 3.1, feel free to submit a patch with the project.
https://farseerphysics.codeplex.com/discussions/227622
rank_loss¶

paddle.fluid.layers.loss.rank_loss(label, left, right, name=None) [source]

This operator implements the rank loss layer in the RankNet model. RankNet is a pairwise ranking model with a training sample consisting of a pair of documents (A and B); the label (P) indicates whether A is ranked higher than B or not. Please refer to more details: RankNet

The rank loss layer takes three inputs: left (\(o_i\)), right (\(o_j\)) and label (\(P_{i,j}\)). The inputs respectively represent RankNet's output scores for documents A and B and the value of label P. The rank loss layer takes batch inputs with size batch_size (batch_size >= 1), P = {0, 1} or {0, 0.5, 1}, where 0.5 means that there is no information about the rank of the input pair. The following equations compute the rank loss \(C_{i,j}\) from the inputs:

\[C_{i,j} = -\tilde{P}_{i,j} \, o_{i,j} + \log(1 + e^{o_{i,j}})\]
\[o_{i,j} = o_i - o_j\]
\[\tilde{P}_{i,j} \in \{0, 0.5, 1\} \ \text{or} \ \{0, 1\}\]

- Parameters
  - label (Variable) – 2-D Tensor with the shape of \([batch, 1]\), the data type is float32, batch indicates the size of the data. Indicates whether A is ranked higher than B or not.
  - left (Variable) – 2-D Tensor with the shape of \([batch, 1]\), the data type is float32. RankNet's output score for doc A.
  - right (Variable) – 2-D Tensor with the shape of \([batch, 1]\), the data type is float32. RankNet's output score for doc B.
  - name (str|None) – The default value is None. Normally there is no need for user to set this property. For more information, please refer to Name.
- Returns: Tensor indicating the output value of the rank loss layer, the data type is float32, and the return value's shape is \([batch, 1]\).
- Return type: Variable
- Raises: ValueError – Any of label, left, and right is not a Variable.
Examples

import paddle.fluid as fluid
import paddle

paddle.enable_static()
label = fluid.data(name="label", shape=[-1, 1], dtype="float32")
left = fluid.data(name="left", shape=[-1, 1], dtype="float32")
right = fluid.data(name="right", shape=[-1, 1], dtype="float32")
out = fluid.layers.rank_loss(label, left, right)
https://www.paddlepaddle.org.cn/documentation/docs/en/api/paddle/fluid/layers/loss/rank_loss_en.html
How to receive from channel till it has values in GO - go

A function has a loop which calls a goroutine inside it with a channel passed into it. After this, I try receiving from the channel till it has values. The go function passes a value into the channel upon each call. My channel runs infinitely.

func (m *StreamsDAO) FindOutput(input model.Input) ([]model.Output, error) {
	// SOME CODE
	var chanNumber int = (input.EndTime - input.StartTime) / 60
	outputChan := make(chan model.Output, chanNumber)
	for i := input.StartTime; i < input.EndTime; i = i + (slider * 60) {
		// SOME CODE
		go ForEachSlide(i, outputChan)
		for outputC := range outputChan {
			outputs = append(outputs, outputC)
		}
	}
	return outputs, err
}

func ForEachSlide(i int, outputChan chan model.Output) {
	// SOME CODE
	outputChan <- output
	// close(outputChan)
}

It's looping infinitely as there is no stop for receiving from the channel. If close(outputChan) is used, only a single element is received from the channel - that's as expected - but I need all values from the channel.

Related: Streaming data over go channel

I'm trying to build a function that I pass a channel to and when run in a goroutine it will constantly post updates (in this instance, values of sin) to a channel. When data is sent to the channel, I then want to send it over a web socket.

func sineWave(value chan float64) {
	var div float64
	sinMult := 6.2839
	i := 0
	log.Println("started")
	for {
		div = (float64(i+1) / sinMult)
		log.Println(math.Sin(div))
		time.Sleep(100 * time.Millisecond)
		value <- math.Sin(div)
		// log.Println()
		i++
		if i == 45 {
			i = 0
		}
	}
	// log.Println(math.Sin(div * math.Pi))
}

It seems to get stuck at value <- math.Sin(div), stopping the rest of main() from running. How do I get sineWave to run indefinitely in the background and to print its output in another function as the values arrive?
There are several mistakes in this code:

- the value chan is never drained, so any write will block
- the value chan is never closed, so any drain will be infinite

A channel must always be drained, and a channel must be closed at some point. Also, please post reproducible examples, otherwise it is difficult to diagnose the issue. This is a slightly modified but working version of the OP's code.

package main

import (
	"fmt"
	"math"
	"time"
)

func sineWave(value chan float64) {
	defer close(value) // A channel must always be closed by the writer.
	var div float64
	sinMult := 6.2839
	i := 0
	fmt.Println("started")
	for {
		div = (float64(i+1) / sinMult)
		time.Sleep(100 * time.Millisecond)
		value <- math.Sin(div)
		i++
		if i == 4 {
			// i = 0 // commented in order to quit the loop, thus close the channel, thus end the main for loop
			break
		}
	}
}

func main() {
	value := make(chan float64)
	go sineWave(value) // start writing the values in a different routine
	// drain the channel; the loop ends when the channel is closed
	for v := range value {
		fmt.Println(v)
	}
}

Receiving values from goroutine for certain amount of time

I have a goroutine which can generate an infinite number of values (each more suitable than the last), but it takes progressively longer to find each value. I'm trying to find a way to add a time limit, say 10 seconds, after which my function does something with the best value received so far.
This is my current "solution", using a channel and timer:

// the goroutine which runs infinitely
// (or at least a very long time for high values of depth)
func runSearch(depth int, ch chan int) {
	for i := 1; i <= depth; i++ {
		fmt.Printf("Searching to depth %v\n", i)
		ch <- search(i)
	}
}

// consumes progressively better values until the channel is closed
func awaitBestResult(ch chan int) {
	var best int
	for result := range ch {
		best = result
	}
	// do something with best result here
}

// run both consumer and producer
func main() {
	timer := time.NewTimer(time.Millisecond * 2000)
	ch := make(chan int)
	go runSearch(1000, ch)
	go awaitBestResult(ch)
	<-timer.C
	close(ch)
}

This mostly works - the best result is processed after the timer ends and the channel is closed. However, I then get a panic (panic: send on closed channel) from the runSearch goroutine, since the channel has been closed by the main function. How can I stop the first goroutine running after the timer has completed? Any help is much appreciated.

You need to ensure that the goroutine knows when it is done processing, so that it doesn't attempt to write to a closed channel, and panic. This sounds like a perfect case for the context package:

func runSearch(ctx context.Context, depth int, ch chan int) {
	for i := 1; i <= depth; i++ {
		select {
		case <-ctx.Done(): // Context cancelled, return
			return
		default:
		}
		fmt.Printf("Searching to depth %v\n", i)
		ch <- search(i)
	}
}

Then in main():

// run both consumer and producer
func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	ch := make(chan int)
	go runSearch(ctx, 1000, ch)
	go awaitBestResult(ch)
	<-ctx.Done() // wait for the timeout before closing
	close(ch)
}

You are getting a panic because your sending goroutine runSearch apparently outlives the timer and it is trying to send a value on the channel which is already closed by your main goroutine. You need to devise a way to signal the sending goroutine not to send any values once your timer has elapsed and before you close the channel in main.
On the other hand, if your search gets over sooner, you also need to communicate to main to move on. You can use one channel and synchronize so that there are no race conditions. And finally you need to know when your consumer has processed all the data before you can exit main. Here's something which may help.

package main

import (
	"fmt"
	"sync"
	"time"
)

var mu sync.Mutex //To protect the stopped variable which will decide if a value is to be sent on the signalling channel
var stopped bool

func search(i int) int {
	time.Sleep(1 * time.Millisecond)
	return (i + 1)
}

// (or at least a very long time for high values of depth)
func runSearch(depth int, ch chan int, stopSearch chan bool) {
	for i := 1; i <= depth; i++ {
		fmt.Printf("Searching to depth %v\n", i)
		n := search(i)
		select {
		case <-stopSearch:
			fmt.Println("Timer over! Searched till ", i)
			return
		default:
		}
		ch <- n
		fmt.Printf("Sent depth %v result for processing\n", i)
	}
	mu.Lock() //To avoid race condition with timer also being
	//completed at the same time as execution of this code
	if stopped == false {
		stopped = true
		stopSearch <- true
		fmt.Println("Search completed")
	}
	mu.Unlock()
}

// consumes progressively better values until the channel is closed
func awaitBestResult(ch chan int, doneProcessing chan bool) {
	var best int
	for result := range ch {
		best = result
	}
	fmt.Println("Best result ", best)
	// do something with best result here
	// and communicate to main when you are done processing the result
	doneProcessing <- true
}

func main() {
	doneProcessing := make(chan bool)
	stopSearch := make(chan bool)
	// timer := time.NewTimer(time.Millisecond * 2000)
	timer := time.NewTimer(time.Millisecond * 12)
	ch := make(chan int)
	go runSearch(1000, ch, stopSearch)
	go awaitBestResult(ch, doneProcessing)
	select {
	case <-timer.C:
		//If at the same time runsearch is also completed and trying to send a value !
		//So we hold a lock before sending value on the channel
		mu.Lock()
		if stopped == false {
			stopped = true
			stopSearch <- true
			fmt.Println("Timer expired")
		}
		mu.Unlock()
	case <-stopSearch:
		fmt.Println("runsearch goroutine completed")
	}
	close(ch)
	//Wait for your consumer to complete processing
	<-doneProcessing
	//Safe to exit now
}

On playground. Change the value of timer to observe both the scenarios.
To remedy this, pass the variable to your goroutine explicitly, to avoid using a closure: for _, item := range list { go func(item string) { doAsyncAndIncrement(item, strReceiver, &currProc) str += <-strReceiver }(item) } Go routine not receiving all data sent through channel — toy example program I'm just playing around with Go, taking it for a test drive so to speak. I'm having a problem where a go routine that is mean to receive 3 integers only seems to receive one. type simpleFunction func() int func run(fChan chan simpleFunction, result chan int) { for{ select { case fn := <-fChan: fmt.Printf("sending: %d down result chan\n", fn()) result <- fn() case <-time.After(time.Second * 2): close(fChan) } } } func recieve(result chan int){ for { select { case x := <-result: fmt.Printf("recieved: %d from result chan\n", x) case <-time.After(time.Second * 2): close(result) } } } So, as you can see the run routine receives functions, evaluates them, and then sends the result down the result channel. Here's my main/test: func main() { fns := []simpleFunction{ func() int {return 1}, func() int {return 2}, func() int {return 3}, } fChan := make(chan simpleFunction) result := make(chan int) go run(fChan, result) go recieve(result) for _, fn := range fns { fmt.Printf("sending a function that returns: %d down function chan\n", fn()) fChan <- fn } } And here's my output: sending a function that returns: 1 down function chan sending: 1 down result chan recieved: 1 from result chan sending a function that returns: 2 down function chan sending a function that returns: 3 down function chan sending: 2 down result chan sending: 3 down result chan So, as you can see, everything seems to go swimmingly for the first function, but it's not so hot afterwards. Any tips or suggestions? There are a couple of issues with this code: The program terminates when main returns. It does not wait for the run and receive goroutines to complete. There's a race on closing the channels. 
There's no guarantee that the sender will top sending before the timeout. If main does not exit, then the for { select { } } loops will spin forever printing zero values. Receive on a closed channel returns the zero value. Why never reached the return statement Look at the following code snippet package main import ( "fmt" "time" ) func sender(ch chan string) { ch <- "Hello" ch <- "Foo" ch <- "and" ch <- "Boo" close(ch) } func main() { ch := make(chan string) go sender(ch) for { select { case value := <-ch: fmt.Println(value) case <-time.After(time.Second * 2): fmt.Println("Return") return } } } As result I've got blank output and the time.After will be never reached. Why? I notice, when I try to receive value from a closed channel, it will receive the zero value from the type. Why can I still receive value from a closed channel? I can check also like is too, v, ok := <-ch if ok is false, the channel is closed. A new two second timer is created on every iteration of the for loop. Closed channels are always ready to receive. The code loops forever because the channel for the new timer is never ready to receive before the closed channel is ready to receive. One way to fix the problem is to set the channel to nil: case value, ok := <-ch: if !ok { ch = nil } else { fmt.Println(value) } Receive on a nil channel is never ready. playground example If you wanted the loop to run for at most two seconds, then you should create the timer outside of the loop: after := time.After(time.Second * 2) and select on this one timer in the loop: case <-after: fmt.Println("Return") return playground example (sleep added to make example run on playground) You can combine setting the channel to nil and creating the timer outside of the loop. playground example
https://jquery.developreference.com/article/10000699/How+to+receive+from+channel+till+it+has+values+in+GO
-------------------------------------------------------------------------------- {-| Module : Queue Copyright : (c) Daan Leijen 2002 License : BSD-style Maintainer : daan@cs.uu.nl Stability : provisional Portability : portable An efficient implementation of queues (FIFO buffers). Based on: * Chris Okasaki, \"/Simple and Efficient Purely Functional Queues and Deques/\", Journal of Functional Programming 5(4):583-592, October 1995. -} ---------------------------------------------------------------------------------} module UU.DData.Queue ( -- * Queue type Queue -- instance Eq,Show -- * Operators , (<>) -- * Query , isEmpty , length , head , tail , front -- * Construction , empty , single , insert , append -- * Filter , filter , partition -- * Fold , foldL , foldR -- * Conversion , elems -- ** List , toList , fromList ) where import qualified Prelude as P (length,filter) import Prelude hiding (length,head,tail,filter) import qualified List -- just for testing -- import QuickCheck {-------------------------------------------------------------------- Operators --------------------------------------------------------------------} infixr 5 <> -- | /O(n)/. Append two queues, see 'append'. (<>) :: Queue a -> Queue a -> Queue a s <> t = append s t {-------------------------------------------------------------------- Queue. Invariants for @(Queue xs ys zs)@: * @length ys <= length xs@ * @length zs == length xs - length ys@ --------------------------------------------------------------------} -- A queue of elements @a@. data Queue a = Queue [a] [a] [a] {-------------------------------------------------------------------- Query --------------------------------------------------------------------} -- | /O(1)/. Is the queue empty? isEmpty :: Queue a -> Bool isEmpty (Queue xs ys zs) = null xs -- | /O(n)/. The number of elements in the queue. length :: Queue a -> Int length (Queue xs ys zs) = P.length xs + P.length ys -- | /O(1)/. The element in front of the queue. 
Raises an error -- when the queue is empty. head :: Queue a -> a head (Queue xs ys zs) = case xs of (x:xx) -> x [] -> error "Queue.head: empty queue" -- | /O(1)/. The tail of the queue. -- Raises an error when the queue is empty. tail :: Queue a -> Queue a tail (Queue xs ys zs) = case xs of (x:xx) -> queue xx ys zs [] -> error "Queue.tail: empty queue" -- | /O(1)/. The head and tail of the queue. front :: Queue a -> Maybe (a,Queue a) front (Queue xs ys zs) = case xs of (x:xx) -> Just (x,queue xx ys zs) [] -> Nothing {-------------------------------------------------------------------- Construction --------------------------------------------------------------------} -- | /O(1)/. The empty queue. empty :: Queue a empty = Queue [] [] [] -- | /O(1)/. A queue of one element. single :: a -> Queue a single x = Queue [x] [] [x] -- | /O(1)/. Insert an element at the back of a queue. insert :: a -> Queue a -> Queue a insert x (Queue xs ys zs) = queue xs (x:ys) zs -- | /O(n)/. Append two queues. append :: Queue a -> Queue a -> Queue a append (Queue xs1 ys1 zs1) (Queue xs2 ys2 zs2) = Queue (xs1++xs2) (ys1++ys2) (zs1++zs2) {-------------------------------------------------------------------- Filter --------------------------------------------------------------------} -- | /O(n)/. Filter elements according to some predicate. filter :: (a -> Bool) -> Queue a -> Queue a filter pred (Queue xs ys zs) = balance xs' ys' where xs' = P.filter pred xs ys' = P.filter pred ys -- | /O(n)/. Partition the elements according to some predicate. partition :: (a -> Bool) -> Queue a -> (Queue a,Queue a) partition pred (Queue xs ys zs) = (balance xs1 ys1, balance xs2 ys2) where (xs1,xs2) = List.partition pred xs (ys1,ys2) = List.partition pred ys {-------------------------------------------------------------------- Fold --------------------------------------------------------------------} -- | /O(n)/. Fold over the elements from left to right (ie. head to tail). 
foldL :: (b -> a -> b) -> b -> Queue a -> b foldL f z (Queue xs ys zs) = foldr (flip f) (foldl f z xs) ys -- | /O(n)/. Fold over the elements from right to left (ie. tail to head). foldR :: (a -> b -> b) -> b -> Queue a -> b foldR f z (Queue xs ys zs) = foldr f (foldl (flip f) z ys) xs {-------------------------------------------------------------------- Conversion --------------------------------------------------------------------} -- | /O(n)/. The elements of a queue. elems :: Queue a -> [a] elems q = toList q -- | /O(n)/. Convert to a list. toList :: Queue a -> [a] toList (Queue xs ys zs) = xs ++ reverse ys -- | /O(n)/. Convert from a list. fromList :: [a] -> Queue a fromList xs = Queue xs [] xs {-------------------------------------------------------------------- instance Eq, Show --------------------------------------------------------------------} instance Eq a => Eq (Queue a) where q1 == q2 = toList q1 == toList q2 instance Show a => Show (Queue a) where showsPrec d q = showsPrec d (toList q) {-------------------------------------------------------------------- Smart constructor: Note that @(queue xs ys zs)@ is always called with @(length zs == length xs - length ys + 1)@. and thus @rotate@ is always called when @(length xs == length ys+1)@. 
--------------------------------------------------------------------} balance :: [a] -> [a] -> Queue a balance xs ys = Queue qs [] qs where qs = xs ++ reverse ys queue :: [a] -> [a] -> [a] -> Queue a queue xs ys (z:zs) = Queue xs ys zs queue xs ys [] = Queue qs [] qs where qs = rotate xs ys [] -- @(rotate xs ys []) == xs ++ reverse ys)@ rotate :: [a] -> [a] -> [a] -> [a] rotate [] [y] zs = y:zs rotate (x:xs) (y:ys) zs = x:rotate xs ys (y:zs) rotate xs ys zs = error "Queue.rotate: unbalanced queue" valid :: Queue a -> Bool valid (Queue xs ys zs) = (P.length zs == P.length xs - P.length ys) && (P.length ys <= P.length xs) {- {-------------------------------------------------------------------- QuickCheck --------------------------------------------------------------------} qcheck prop = check config prop where config = Config { configMaxTest = 500 , configMaxFail = 10000 , configSize = \n -> (div n 2 + 3) , configEvery = \n args -> let s = show n in s ++ [ '\b' | _ <- s ] } {-------------------------------------------------------------------- Arbitrary, reasonably balanced queues --------------------------------------------------------------------} instance Arbitrary a => Arbitrary (Queue a) where arbitrary = do{ qs <- arbitrary ; let (ys,xs) = splitAt (P.length qs `div` 2) qs ; return (Queue xs ys (xs ++ reverse ys)) } prop_Valid :: Queue Int -> Bool prop_Valid q = valid q prop_InsertLast :: [Int] -> Property prop_InsertLast xs = not (null xs) ==> head (foldr insert empty xs) == last xs prop_InsertValid :: [Int] -> Bool prop_InsertValid xs = valid (foldr insert empty xs) prop_Queue :: [Int] -> Bool prop_Queue xs = toList (foldl (flip insert) empty xs) == foldr (:) [] xs prop_List :: [Int] -> Bool prop_List xs = toList (fromList xs) == xs prop_TailValid :: [Int] -> Bool prop_TailValid xs = valid (tail (foldr insert empty (1:xs))) -}
http://hackage.haskell.org/package/uulib-0.9.5/docs/src/UU-DData-Queue.html
13 August 2008 23:59 [Source: ICIS news]

LONDON (ICIS news)--The European paraffin wax (p-wax) market saw average price increases of €80/tonne ($119/tonne) across the board for August on tighter supply, buyers and sellers said on Wednesday. Price hikes of up to €100/tonne from July were heard, but this was not widely confirmed. Despite the fall in raw material costs, the lack of Chinese imports added to the shortage of product in the region.

Demand was also strong and it was expected to increase after the end of the holiday season in August, one trader said. Accordingly, domestic prices for 52-54°C melt point grade were now at levels of €1,160-1,220/tonne FD (free delivered) NWE (northwest Europe).

Looking ahead to September, one producer announced that it was postponing planned hikes of €50-70/tonne in light of the falling price of crude. Prices were expected to stabilise over the coming weeks, although there may still be an increase sometime in the next few months due to the approaching peak season, the source said. P-wax buyers were expecting immediate price reductions in line with the falling value of crude oil. Sellers were adamant there could be no immediate change as the decrease in value would take time to filter through.

Common uses for paraffin wax include candles, corrugated boxes, waxed papers, frozen food cartons and wrappers.

($1 = €0.67)
http://www.icis.com/Articles/2008/08/13/9148224/europe-august-p-wax-rises-80tonne-on-tightness.html
Roger Leigh <rleigh@whinlatter.ukfsn.org> writes: > While historical reasons are acceptable for users' dotfiles, I remain to > be convinced that there is a logical rationale for them in any system > location, or even anywhere under $HOME except the root. "It's way too much of a pain to modify upstream code that uses files beginning with '.' for reasons that are essentially cosmetic." Passing along the request to upstream makes perfect sense. I would recommend using _ instead of . if they want to put the file into a clearly separate namespace. But if upstream doesn't bite, I'd say that this is the sort of divergence and maintenance burden that Debian really doesn't need. The advantages are not compelling enough, IMO. -- Russ Allbery (rra@debian.org) <>
https://lists.debian.org/debian-devel/2006/06/msg00284.html
spaCy 1.6.0 supports models for English and German, as well as tokenization for Chinese, Spanish, Italian, French, Portuguese, Dutch, Swedish and Hungarian. It's commercial open-source software, released under the MIT license.

💫 Version 1.6 out now!

Supported platforms:

- CPython 2.6, 2.7, 3.3, 3.4, 3.5 (only 64 bit)
- macOS / OS X
- Linux
- Windows (Cygwin, MinGW, Visual Studio)

Install spaCy

spaCy is compatible with 64-bit CPython 2.6+/3.3+ and runs on Unix/Linux, OS X and Windows. Source packages are available via pip. Please make sure that you have a working build environment set up. See notes on Ubuntu, macOS/OS X and Windows for details.

pip

When using pip it is generally recommended to install packages in a virtualenv to avoid modifying system state:

pip install spacy

Python packaging is awkward at the best of times, and it's particularly tricky with C extensions, built via Cython, requiring large data files. So, please report issues as you encounter them.

Install model

After installation you need to download a language model. Currently only models for English and German, named en and de, are available.

python -m spacy.en.download all
python -m spacy.de.download all

The download command fetches about 1 GB of data which it installs within the spacy package directory.

Upgrading spaCy

To upgrade spaCy to the latest release:

pip install -U spacy

Sometimes new releases require a new language model. Then you will have to upgrade to a new model, too. You can also force re-downloading and installing a new language model:

python -m spacy.en.download --force

Compile

# find git install instructions at
git clone
cd spaCy
virtualenv .env && source .env/bin/activate
pip install -r requirements.txt
pip install -e .
Compared to a regular install via pip, requirements.txt additionally installs developer dependencies such as cython.

Download model to custom location

You can specify where spacy.en.download and spacy.de.download download the language model to using the --data-path or -d argument:

    python -m spacy.en.download all --data-path /some/dir

If you choose to download to a custom location, you will need to tell spaCy where to load the model from in order to use it. You can do this either by calling spacy.util.set_data_path() before calling spacy.load(), or by passing a path argument to the spacy.en.English or spacy.de.German constructors.

Changelog

2017-01-16 v1.6.0: Improvements to tokenizer and tests

✨ Major features and improvements

- Updated token exception handling mechanism to allow the usage of arbitrary functions as token exception matchers.
- Improve how tokenizer exceptions for English contractions and punctuation are generated.
- Update language data for Hungarian and Swedish tokenization.
- Update to use Thinc v6 to prepare for spaCy v2.0.

🔴 Bug fixes

- Fix issue #326: Tokenizer is now more consistent and handles abbreviations correctly.
- Fix issue #344: Tokenizer now handles URLs correctly.
- Fix issue #483: Period after two or more uppercase letters is split off in tokenizer exceptions.
- Fix issue #631: Add richcmp method to Token.
- Fix issue #718: Contractions with She are now handled correctly.
- Fix issue #736: Times are now tokenized with correct string values.
- Fix issue #743: Token is now hashable.
- Fix issue #744: were and Were are now excluded correctly from contractions.

📋 Tests

- Modernise and reorganise all tests and remove model dependencies where possible.
- Improve test speed to ~20s for basic tests (from previously >80s) and ~100s including models (from previously >200s).
- Add fixtures for spaCy components and test utilities, e.g. to create Doc object manually.
- Add documentation for tests to explain conventions and organisation.

👥 Contributors

Thanks to @oroszgy, @magnusburton, @guyrosin and @danielhers for the pull requests!

2016-12-27 v1.5.0: Alpha support for Swedish and Hungarian

✨ Major features and improvements

- NEW: Alpha support for Swedish tokenization.
- NEW: Alpha support for Hungarian tokenization.
- Update language data for Spanish tokenization.
- Speed up tokenization when no data is preloaded by caching the first 10,000 vocabulary items seen.

🔴 Bug fixes

- List the language_data package in the setup.py.
- Fix missing vec_path declaration that was failing if add_vectors was set.
- Allow Vocab to load without serializer_freqs.

📖 Documentation and examples

- NEW: spaCy Jupyter notebooks repo: ongoing collection of easy-to-run spaCy examples and tutorials.
- Fix issue #657: Generalise dependency parsing annotation specs beyond English.
- Fix various typos and inconsistencies.

👥 Contributors

Thanks to @oroszgy, @magnusburton, @jmizgajski, @aikramer2, @fnorf and @bhargavvader for the pull requests!

2016-12-18 v1.4.0: Improved language data and alpha Dutch support

✨ Major features and improvements

- NEW: Alpha support for Dutch tokenization.
- Reorganise and improve format for language data.
- Add shared tag map, entity rules, emoticons and punctuation to language data.
- Convert entity rules, morphological rules and lemmatization rules from JSON to Python.
- Update language data for English, German, Spanish, French, Italian and Portuguese.

🔴 Bug fixes

- Fix issue #649: Update and reorganise stop lists.
- Fix issue #672: Make token.ent_iob_ return unicode.
- Fix issue #674: Add missing lemmas for contracted forms of "be" to TOKENIZER_EXCEPTIONS.
- Fix issue #683: Morphology class now supplies tag map value for the special space tag if it's missing.
- Fix issue #684: Ensure spacy.en.English() loads the Glove vector data if available. Previously was inconsistent with behaviour of spacy.load('en').
- Fix issue #685: Expand TOKENIZER_EXCEPTIONS with unicode apostrophe (’).
- Fix issue #689: Correct typo in STOP_WORDS.
- Fix issue #691: Add tokenizer exceptions for "gonna" and "Gonna".

⚠️ Backwards incompatibilities

No changes to the public, documented API, but the previously undocumented language data and model initialisation processes have been refactored and reorganised. If you were relying on the bin/init_model.py script, see the new spaCy Developer Resources repo. Code that references internals of the spacy.en or spacy.de packages should also be reviewed before updating to this version.

📖 Documentation and examples

- NEW: "Adding languages" workflow.
- NEW: "Part-of-speech tagging" workflow.
- NEW: spaCy Developer Resources repo – scripts, tools and resources for developing spaCy.
- Fix various typos and inconsistencies.

👥 Contributors

Thanks to @dafnevk, @jvdzwaan, @RvanNieuwpoort, @wrvhage, @jaspb, @savvopoulos and @davedwards for the pull requests!

2016-12-03 v1.3.0: Improve API consistency

✨ API improvements

- Add Span.sentiment attribute.
- #658: Add Span.noun_chunks iterator (thanks @pokey).
- #642: Let --data-path be specified when running download.py scripts (thanks @ExplodingCabbage).
- #638: Add German stopwords (thanks @souravsingh).
- #614: Fix PhraseMatcher to work with new Matcher (thanks @sadovnychyi).

🔴 Bug fixes

- Fix issue #605: accept argument to Matcher now rejects matches as expected.
- Fix issue #617: Vocab.load() now works with string paths, as well as Path objects.
- Fix issue #639: Stop words in Language class now used as expected.
- Fix issues #656, #624: Tokenizer special-case rules now support arbitrary token attributes.

📖 Documentation and examples

- Add "Customizing the tokenizer" workflow.
- Add "Training the tagger, parser and entity recognizer" workflow.
- Add "Entity recognition" workflow.
- Fix various typos and inconsistencies.
👥 Contributors

Thanks to @pokey, @ExplodingCabbage, @souravsingh, @sadovnychyi, @manojsakhwar, @TiagoMRodrigues, @savkov, @pspiegelhalter, @chenb67, @kylepjohnson, @YanhaoYang, @tjrileywisc, @dechov, @wjt, @jsmootiv and @blarghmatey for the pull requests!

2016-11-04 v1.2.0: Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese

✨ Major features and improvements

- NEW: Support Chinese tokenization, via Jieba.
- NEW: Alpha support for French, Spanish, Italian and Portuguese tokenization.

🔴 Bug fixes

- Fix issue #376: POS tags for "and/or" are now correct.
- Fix issue #578: --force argument on download command now operates correctly.
- Fix issue #595: Lemmatization corrected for some base forms.
- Fix issue #588: Matcher now rejects empty patterns.
- Fix issue #592: Added exception rule for tokenization of "Ph.D."
- Fix issue #599: Empty documents now considered tagged and parsed.
- Fix issue #600: Add missing token.tag and token.tag_ setters.
- Fix issue #596: Added missing unicode import when compiling regexes that led to incorrect tokenization.
- Fix issue #587: Resolved bug that caused Matcher to sometimes segfault.
- Fix issue #429: Ensure missing entity types are added to the entity recognizer.

2016-10-23 v1.1.0: Bug fixes and adjustments

- Rename new pipeline keyword argument of spacy.load() to create_pipeline.
- Rename new vectors keyword argument of spacy.load() to add_vectors.

🔴 Bug fixes

- Fix issue #544: Add vocab.resize_vectors() method, to support changing to vectors of different dimensionality.
- Fix issue #536: Default probability was incorrect for OOV words.
- Fix issue #539: Unspecified encoding when opening some JSON files.
- Fix issue #541: GloVe vectors were being loaded incorrectly.
- Fix issue #522: Similarities and vector norms were calculated incorrectly.
- Fix issue #461: ent_iob attribute was incorrect after setting entities via doc.ents.
- Fix issue #459: Deserialiser failed on empty doc.
- Fix issue #514: Serialization failed after adding a new entity label.

2016-10-18 v1.0.0: Support for deep learning workflows and entity-aware rule matcher

✨ Major features and improvements

- NEW: custom processing pipelines, to support deep learning workflows.
- NEW: Rule matcher now supports entity IDs and attributes.
- NEW: Official/documented training APIs and GoldParse class.
- Download and use GloVe vectors by default.
- Make it easier to load and unload word vectors.
- Improved rule matching functionality.
- Move basic data into the code, rather than the json files. This makes it simpler to use the tokenizer without the models installed, and makes adding new languages much easier.
- Replace file-system strings with Path objects. You can now load resources over your network, or do similar trickery, by passing any object that supports the Path protocol.

⚠️ Backwards incompatibilities

- The data_dir keyword argument of Language.__init__ (and its subclasses English.__init__ and German.__init__) has been renamed to path.
- Details of how the Language base-class and its sub-classes are loaded, and how defaults are accessed, have been heavily changed. If you have your own subclasses, you should review the changes.
- The deprecated token.repvec name has been removed.
- The .train() method of Tagger and Parser has been renamed to .update().
- The previously undocumented GoldParse class has a new __init__() method. The old method has been preserved in GoldParse.from_annot_tuples().
- Previously undocumented details of the Parser class have changed.
- The previously undocumented get_package and get_package_by_name helper functions have been moved into a new module, spacy.deprecated, in case you still need them while you update.

🔴 Bug fixes

- Fix get_lang_class bug when GloVe vectors are used.
- Fix Issue #411: doc.sents raised IndexError on empty string.
- Fix Issue #455: Correct lemmatization logic.
- Fix Issue #371: Make Lexeme objects hashable.
- Fix Issue #469: Make noun_chunks detect root NPs.

👥 Contributors

Thanks to @daylen, @RahulKulhari, @stared, @adamhadani, @izeye and @crawfordcomeaux for the pull requests!

2016-05-10 v0.101.0: Fixed German model

- Fixed bug that prevented German parses from being deprojectivised.
- Bug fixes to sentence boundary detection.
- Add rich comparison methods to the Lexeme class.
- Add missing Doc.has_vector and Span.has_vector properties.
- Add missing Span.sent property.

2016-05-05 v0.100.7: German!

spaCy finally supports another language, in addition to English. We're lucky to have Wolfgang Seeker on the team, and the new German model is just the beginning. Now that there are multiple languages, you should consider loading spaCy via the load() function. This function also makes it easier to load extra word vector data for English:

    import spacy
    en_nlp = spacy.load('en', vectors='en_glove_cc_300_1m_vectors')
    de_nlp = spacy.load('de')

To support use of the load function, there are also two new helper functions: spacy.get_lang_class and spacy.set_lang_class. Once the German model is loaded, you can use it just like the English model:

    doc = nlp(u'''Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus
    freien Inhalten, zu dem du mit deinem Wissen beitragen kannst. Seit Mai 2001
    sind 1.936.257 Artikel in deutscher Sprache entstanden.''')

    for sent in doc.sents:
        print(sent.root.text, sent.root.n_lefts, sent.root.n_rights)

    # (u'ist', 1, 2)
    # (u'sind', 1, 3)

The German model provides tokenization, POS tagging, sentence boundary detection, syntactic dependency parsing, recognition of organisation, location and person entities, and word vector representations trained on a mix of open subtitles and Wikipedia data.
It doesn’t yet provide lemmatisation or morphological analysis, and it doesn’t yet recognise numeric entities such as numbers and dates.

Bugfixes

- spaCy < 0.100.7 had a bug in the semantics of the Token.__str__ and Token.__unicode__ built-ins: they included a trailing space.
- Improve handling of "infixed" hyphens. Previously the tokenizer struggled with multiple hyphens, such as "well-to-do".
- Improve handling of periods after mixed-case tokens.
- Improve lemmatization for English special-case tokens.
- Fix bug that allowed spaces to be treated as heads in the syntactic parse.
- Fix bug that led to inconsistent sentence boundaries before and after serialisation.
- Fix bug from deserialising untagged documents.

2016-03-08 v0.100.6: Add support for GloVe vectors

This release offers improved support for replacing the word vectors used by spaCy. To install Stanford's GloVe vectors, trained on the Common Crawl, just run:

    sputnik --name spacy install en_glove_cc_300_1m_vectors

To reduce memory usage and loading time, we've trimmed the vocabulary down to 1m entries.

This release also integrates all the code necessary for German parsing. A German model will be released shortly. To assist in multi-lingual processing, we've added a load() function. To load the English model with the GloVe vectors:

    spacy.load('en', vectors='en_glove_cc_300_1m_vectors')

2016-02-07 v0.100.4: Fix OSX problem introduced in 0.100.3

Small correction to right_edge calculation.

2016-02-06 v0.100.3

Support multi-threading, via the .pipe method. spaCy now releases the GIL around the parser and entity recognizer, so systems that support OpenMP should be able to do shared memory parallelism at close to full efficiency. We've also greatly reduced loading time, and fixed a number of bugs.

2016-01-21 v0.100.1: Fix install for OSX

v0.100 included header files built on Linux that caused installation to fail on OSX. This should now be corrected.
We also update the default data distribution, to include a small fix to the tokenizer.

2016-01-19 v0.100: Revise setup.py, better model downloads, bug fixes

- Redo setup.py, and remove ugly headers_workaround hack. Should result in fewer install problems.
- Update data downloading and installation functionality, by migrating to the Sputnik data-package manager. This will allow us to offer finer grained control of data installation in future.
- Fix bug when using custom entity types in Matcher. This should work by default when using the English.__call__ method of running the pipeline. If invoking Parser.__call__ directly to do NER, you should call the Parser.add_label() method to register your entity type.
- Fix head-finding rules in Span.
- Fix problem that caused doc.merge() to sometimes hang.
- Fix problems in handling of whitespace.

2015-11-08 v0.99: Improve span merging, internal refactoring

- Merging multi-word tokens into one, via the doc.merge() and span.merge() methods, no longer invalidates existing Span objects. This makes it much easier to merge multiple spans, e.g. to merge all named entities, or all base noun phrases. Thanks to @andreasgrv for help on this patch.
- Lots of internal refactoring, especially around the machine learning module, thinc. The thinc API has now been improved, and the spacy._ml wrapper module is no longer necessary.
- The lemmatizer now lower-cases non-noun, noun-verb and non-adjective words.
- A new attribute, .rank, is added to Token and Lexeme objects, giving the frequency rank of the word.

2015-11-03 v0.98: Smaller package, bug fixes

- Remove binary data from PyPi package.
- Delete archive after downloading data.
- Use updated cymem, preshed and thinc packages.
- Fix information loss in deserialize.
- Fix __str__ methods for Python2.

2015-10-23 v0.97: Load the StringStore from a json list, instead of a text file

- Fix bugs in download.py.
- Require --force to over-write the data directory in download.py.
- Fix bugs in Matcher and doc.merge().

2015-10-19 v0.96: Hotfix to .merge method

- Fix bug that caused text to be lost after .merge.
- Fix bug in Matcher when matched entities overlapped.

2015-10-18 v0.95: Bugfixes

- Reform encoding of symbols.
- Fix bugs in Matcher.
- Fix bugs in Span.
- Add tokenizer rule to fix numeric range tokenization.
- Add specific string-length cap in Tokenizer.
- Fix token.conjuncts.

2015-10-09 v0.94

- Fix memory error that caused crashes on 32bit platforms.
- Fix parse errors caused by smart quotes and em-dashes.

Author: Matthew Honnibal
Topic :: Scientific/Engineering
Package Index Owner: honnibal
DOAP record: spacy-1.6.0.xml
https://pypi.python.org/pypi/spacy
Hello everyone! Welcome to the first in a series! I'm going to try to explain a common software engineering interview question to better understand it and hopefully remember it when the time comes! These problems will primarily be solved in JavaScript, as that is my language of choice when testing (and I just want to become a good front-end dev. 🥺)

Q: Find the second largest number in a given array.

Params: We are given an array of whole, positive integers (no negative numbers or floats). We are to write a function and return the second largest integer.

Let's start! We'll write the skeleton of our function, setting the input/argument as "arr" for array:

    function secondLargest(arr){};

Then, we'll need to set two empty variables: largest and second. Why? We will need placeholders for both our prospective largest and second largest numbers as we loop through the array. We want to keep track of each integer that is in the array and measure the value against the others.

    function secondLargest(arr){
      let largest = '';
      let second = '';
    }

...Which brings us to our next step: create a for-loop! As we iterate through the array, we will measure each value against each other, comparing the variable "largest" to the current iteration value (arr[i]).

    function secondLargest(arr){
      let largest = '';
      let second = '';
      for(let i=0; i < arr.length; i++){};
    };

To compare, we will create an if-statement comparing largest to arr[i]. If arr[i] is greater than largest, then replace it with the current value of arr[i] by redeclaring largest and setting it equal to arr[i].

    function secondLargest(arr){
      let largest = '';
      let second = '';
      for(let i=0; i < arr.length; i++){
        if(arr[i] > largest){ largest = arr[i] };
      };
    };

We found the largest number! But how do we get the second largest? We did find it already (kind of): we'll just set the former "largest" number to the "second" variable.
HOWEVER, we must declare the second variable BEFORE we declare the new largest number, simply because order matters: JavaScript executes code from the top down.

    function secondLargest(arr){
      let largest = '';
      let second = '';
      for(let i=0; i < arr.length; i++){
        if(arr[i] > largest){
          second = largest;
          largest = arr[i];
        };
      };
    };

Speaking of order and specificity, it's time we find the "true" second-largest number in the array. Let's create another if-statement with more specific parameters: if arr[i] is GREATER THAN second AND LESS THAN largest, then set arr[i] as second.

    function secondLargest(arr){
      let largest = '';
      let second = '';
      for(let i=0; i < arr.length; i++){
        if(arr[i] > largest){
          second = largest;
          largest = arr[i];
        };
        if(arr[i] > second && arr[i] < largest){
          second = arr[i];
        };
      };
    };

Finally, we'll return our second variable to complete the requirement.

    function secondLargest(arr){
      let largest = '';
      let second = '';
      for(let i=0; i < arr.length; i++){
        if(arr[i] > largest){
          second = largest;
          largest = arr[i];
        };
        if(arr[i] > second && arr[i] < largest){
          second = arr[i];
        };
      };
      return second;
    };

And there you have it! It's a fairly simple solution, a bit long since we're using a traditional for-loop, but it works! Feel free to post your own solutions in your programming language of choice in the comments!

Thanks for reading! If you'd like to keep in touch, please reach out to me on Twitter!

Discussion (21)

This is a great way of learning, growing and sharing. Keep it up please, I'll follow along.

For these kinds of things I really love the expressive power of functional programming that allows me to declaratively process the array (projecting, reducing, filtering, ...). Your solution works well, no doubt. I just like it a bit more expressive (performance issues put aside), along the lines of (pseudocode):

    return arr.sortDescending().skip(1).firstOrDefault();

Why not return arr.sortDescending()[1]?

Depends on the implementation.
I was biased to think of LINQ (C#). If the object returned by 'sortDescending' contains less than two items, the index approach will throw an exception.

Just for the sake of brevity, but not improved performance:

I might be misunderstanding the docs (developer.mozilla.org/en-US/docs/W...), but it looks like there would be an issue with this implementation. When a callback function isn't passed to the sort method, all the non-undefined array elements are converted to strings and compared to one another. This means that second would point to 45 and max would point to 8900 even though 10,000 exists in the array.

Sure, good point:

Daaamn that's slick. 👌🏾👌🏾👌🏾

Yeah but I bet yours is faster :)

The initializers should probably be undefined since the function will return the wrong result if the array consists of fewer than 2 items or if all items in the array consist of the same value. Gotta watch out for those edge cases! Thanks for posting, Cat! This is my solution:

I think the second 'if' should be an 'else if'. What if the array given is something like [1,1], meaning there is no second largest integer?

Good point! Let me try that out and I'll fix it up. :) Thanks Sebastian!

My solution using the Ramda library

Hi Cat! Amazing solution! There are several testcases that would not pass:

- In case [0,1] -> 0 is the second largest, but the function returns ''.
- In case [1,0] -> 0 again is the second largest, but the function will return 1.
- In case [1,1,1,0] -> same as in the previous testcase.

This happens because of comparison of 0 with an empty string:

    '' === 0 // false
    '' > 0   // false
    '' < 0   // false

The small change that I would do: you know that the arr[0] element exists for sure if the array is not empty. Sorting will allow you to start with the smallest element, whatever it is, and find the second largest.

A small note for other solutions: you cannot just sort an array and take the second element from the end ;) In case of [0,1,2,2,2,2] the code will return 2.
You also need to make the numbers unique. This challenge also tests how creative you can be with your testcases and your ability to think about what can potentially break your code.

How about checking that the array is valid (a numeric array with length > 0) via:

    if (!Array.isArray(array) || !array.length || array.some(isNaN)) {
      return ("Not a valid array")
    }

Regarding the 0 comparison, how about:

    let max = -Infinity, second = -Infinity

Ooooh got it. Darn it, HackerRank, for giving me a false-positive. ;____; I'll refactor the code above and credit you! Thanks Lia! You da best.

As a learning lesson, this example could be improved:

First, there's a bug in that it doesn't handle cases like [2, 5, 10, 9, 10, 7] where the numbers aren't unique. (I double-checked, uniqueness was not part of your Params description above.) In that case, the code above will return 9, even though it's clearly not the second-largest.

Second, if you fix that by using >= in the first test, the second one isn't even needed. The storing of second in the first test takes care of the second largest, even if there's only one element in the array. So the for loop could be simplified to just:

    if(arr.length > 1) return arr.sort((a,b)=> {return b-a})[1]

Subscribed!

You may want to make it recursive for multi-dimensional arrays

Well the problem was toooo simple.
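Pulling together the fixes suggested in the discussion (sentinel initialisers instead of empty strings, so that 0-valued elements compare correctly, and an else-if so the same element can't be counted twice), the article's two-variable scan can be sketched in Python; this is an illustration of the same idea, not the series' JavaScript:

```python
def second_largest(arr):
    # Sentinels: float('-inf') compares below any real number, unlike ''.
    largest = second = float('-inf')
    for x in arr:
        if x > largest:
            # The old largest becomes the new runner-up.
            second = largest
            largest = x
        elif second < x < largest:
            second = x
    # With fewer than two distinct values there is no second largest.
    return None if second == float('-inf') else second

print(second_largest([0, 1]))   # 0
print(second_largest([1, 0]))   # 0
print(second_largest([1, 1]))   # None
```

This handles the [0,1], [1,0] and [1,1] cases raised in the comments without sorting, in a single O(n) pass.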
https://practicaldev-herokuapp-com.global.ssl.fastly.net/cat/interview-q-s-decoded-issue-1-40i3
On Wed, Dec 22, 2010 at 03:45:39PM -0800, Dan Williams wrote:
> 3.6 Constraints:
> 1/ Calls to async_<operation> are not permitted in IRQ context. Other
>    contexts are permitted provided constraint #2 is not violated.

BTW, this is misleading. Have the functions been renamed dma_async_xxx(), eg dma_async_memcpy_buf_to_buf etc, or are you referring just to:

    async_dmaengine_get
    async_dmaengine_put
    async_dma_find_channel
    async_tx_ack
    async_tx_clear_ack
    async_tx_test_ack

Beware of just renaming it to dma_async_<operation> as there are other functions in that namespace which may not be appropriate.

Eg, is it really illegal to call dma_async_issue_pending() from IRQ context? That'd make it exceedingly difficult to use the DMA engine with the slave API in a lot of device drivers.
http://lkml.org/lkml/2010/12/22/336
Setup Network

A dask.distributed network consists of one Scheduler node and several Worker nodes. One can set these up in a variety of ways.

Using the Command Line

We launch the dask-scheduler executable in one process and the dask-worker executable in several processes, possibly on different machines.

Launch dask-scheduler on one node:

    $ dask-scheduler
    Start scheduler at 192.168.0.1:8786

Then launch dask-worker on the rest of the nodes, providing the address to the node that hosts dask-scheduler:

    $ dask-worker 192.168.0.1:8786
    Start worker at:            192.168.0.2:12345
    Registered with center at:  192.168.0.1:8786

    $ dask-worker 192.168.0.1:8786
    Start worker at:            192.168.0.3:12346
    Registered with center at:  192.168.0.1:8786

    $ dask-worker 192.168.0.1:8786
    Start worker at:            192.168.0.4:12347
    Registered with center at:  192.168.0.1:8786

There are various mechanisms to deploy these executables on a cluster, ranging from manually SSH-ing into all of the nodes to more automated systems like SGE/SLURM/Torque or Yarn/Mesos.

Using the Python API

Start the Scheduler, providing the listening port (defaults to 8786) and Tornado IOLoop (defaults to IOLoop.current()):

    from distributed import Scheduler
    from tornado.ioloop import IOLoop
    from threading import Thread

    loop = IOLoop.current()
    t = Thread(target=loop.start, daemon=True)
    t.start()

    s = Scheduler(loop=loop)
    s.start()  # listen on the default port, 8786

Alternatively, a scheduler and workers can be run together in a single process with LocalCluster:

    from distributed import LocalCluster
    c = LocalCluster(processes=False)

A scheduler will be available under c.scheduler and a list of workers under c.workers. There is an IOLoop running in a background thread.

Using AWS

See Cloud Deployments for the latest information on deploying to Amazon cloud.

Software Environment

The workers and clients should all share the same software environment. That means that they should all have access to the same libraries and that those libraries should be the same version. Dask generally assumes that it can call a function on any worker with the same outcome (unless explicitly told otherwise.)
This is typically enforced through external means, such as by having a network file system (NFS) mount for libraries, by starting the dask-worker processes in equivalent Docker containers,
http://distributed.dask.org/en/latest/setup.html
Hi Greg,

Thanks for the comments :)

Greg KH wrote:
> > 2. Problem
>
> There is no problem :)
>
> > Recent large machines have many PCI devices and some boards that
> > contain devices (e.g. CPU, memory, and/or I/O devices). A certain PCI
> > device (PCI1) might be connected with other one (PCI2), which means that
> > there is a dependency between PCI1 and PCI2.
>
> You have this today? On what platform? This is the first I have heard
> of this. If needed, we can merely change the pci hotplug core to allow
> a hierarchy of pci slots. Will that solve your problem?

I meant that a P2P bridge (that has hotpluggable slots) and a PCI device would have such a dependency. As you suggested, if the PCI hotplug core is changed that way, the dependency would be represented in sysfs quite well :) However, a board that contains CPU, memory and/or I/O devices still doesn't have a directory in sysfs to represent dependencies... Actually, I'm focusing on hotplug features for that kind of board, and making a patch that enables it. That patch will be coming out soon.

> > 3. Suggestion
> > -------------
> > To solve the problem, I'd like to propose the following idea.
> >
> > ["hotplug" directory]
> > This directory is to represent a hierarchy of hotpluggable devices.
>
> Hm, no. What about usb, firewire, scsi and any other future bus that
> can be "hotpluggable". The kernel doesn't treat them differently, and
> we shouldn't either.
>
> > "hotpluggable device" means a device that can be powered off and
> > removed physically from the system running. The hierarchy describes a
> > dependency between each device. This directory would be placed, like:
> >
> > /sys/devices/hotplug
> >
> > Any systems that enable hotplug (e.g. ACPI, DLPAR) can create their
> > own directory right under the "hotplug" directory, like:
> >
> > /sys/devices/hotplug/acpi
> > /sys/devices/hotplug/dlpar
> >
> > Each of systems can create directories and files under the own directory,
> > and these directories should be easy for user to use.
> >
> > [ACPI based Hotplug Case]
> > I think that ACPI is one of the systems that know dependencies of devices.
>
> But it doesn't know about all devices in the system (like USB, firewire
> and others), so this would quickly break down. I also don't like
> creating a solution that is so hard-wired for one firmware type like
> ACPI. What about Open Firmware based machines? Pure BIOS machines? No
> firmware at all machines? The current sysfs trees work just fine for
> all of them, without users having to figure out what the access type the
> kernel uses to get to the devices.

That's right. The /sys/devices/hotplug/ACPI/ tree becomes a hard-wired one. I was thinking to define the board by using ACPI (as a "generic container device" in the ACPI namespace). Therefore, if there is the new tree I proposed in the kernel, it would be easy to represent the hierarchy, and a directory for the board appears in the new tree. So I thought that we could put a control file to invoke the board hotplug and an information file under the directory. (Actually, I've made a rough patch for the new tree and it seems to work fine :) I also thought that the interface for hotplug could be unified so that it would become easier for user to use.

However, it's a hard-wired way and the current sysfs trees work fine for all devices as you mentioned. Now I have just one thing necessary in sysfs: a directory and files for the board. Should I abstract the "board" and introduce a new directory for board under /sys/devices/system/, like the NUMA node directory? (e.g. /sys/devices/system/board/) The control file, the information file, etc. could be created under the directory, like /sys/devices/hotplug/board/board0/eject. If it's possible, there might be less impact to the kernel. I'd appreciate it if you would comment on this :)

Thanks,
Kei
http://lkml.org/lkml/2004/4/23/47
Opened 8 years ago
Closed 8 years ago

#2155 closed bug (duplicate)

More deadlock issues with concurrent I/O

Description

The following hangs on the current ghc-STABLE snapshot, and shouldn't.

    module Main where

    import Control.Concurrent
    import qualified Data.ByteString.Lazy.Char8 as B
    import System.IO
    import System.Process

    launch :: B.ByteString -> IO B.ByteString
    launch i = do
        (hin, hout, herr, ph) <- runInteractiveProcess "cat" [] Nothing Nothing
        -- forkIO $ collect ph -- This doesn't seem to be relevant to the problem.
        forkIO $ do
            B.hPut hin i
            hClose hin
        B.hGetContents hout

    collect :: ProcessHandle -> IO ()
    collect ph = do
        waitForProcess ph
        return ()

    main :: IO ()
    main = let i = B.pack "foo"
           in do o <- foldl (>>=) (return i) (take 5 (repeat launch))
                 B.putStrLn o

Change History (3)

comment:1 Changed 8 years ago by dfranke
- Component changed from Compiler to Runtime System

comment:2 Changed 8 years ago by simonmar
- Component changed from Runtime System to libraries/process
- difficulty set to Unknown
- Milestone set to 6.8.3
- Owner set to simonmar

I strongly suspect this is the same as #1780, but leaving it open so that we can check.

comment:3 Changed 8 years ago by simonmar
- Resolution set to duplicate
- Status changed from new to closed
https://ghc.haskell.org/trac/ghc/ticket/2155
Following on from this previous post, here is a short tutorial on creating this and similar charts using pandas, by automatically downloading data from Johns Hopkins University's CSSE GitHub repository (the "JHU/CSSE dataset"). The code described here is available on my GitHub repository.

First, we'll want a local copy of the data (up to the current date) so we don't have to keep downloading it from the internet. pandas makes this easy, since the CSV data is already well-formatted, with a header line:

    import pandas as pd

    # The confirmed cases by country
    data_url = (''
                'csse_covid_19_data/csse_covid_19_time_series'
                '/time_series_covid19_confirmed_global.csv')
    df = pd.read_csv(data_url)
    df.to_csv('covid-19-cases.csv')

    # The number of deaths by country
    data_url = (''
                'csse_covid_19_data/csse_covid_19_time_series'
                '/time_series_covid19_deaths_global.csv')
    df = pd.read_csv(data_url)
    df.to_csv('covid-19-deaths.csv')

We also want a list of country populations: Wikipedia has a suitable page, but not all the country names used on this page are the same as those used in the JHU/CSSE dataset, so we have some cleaning to do. First create a dictionary mapping those from the dataset to those on the Wikipedia page, and save it in a file called country_aliases.py:

    # country_aliases.py
    """
    A mapping from country names in the JHU/CSSE dataset to those used by the
    Wikipedia page for country populations.
    """

    country_aliases = {
        'Cabo Verde': 'Cape Verde',
        'Congo (Brazzaville)': 'Congo',
        'Congo (Kinshasa)': 'DR Congo',
        "Cote d'Ivoire": 'Ivory Coast',
        'Czechia': 'Czech Republic',
        'Holy See': 'Vatican City',
        'Korea, South': 'South Korea',
        'Taiwan*': 'Taiwan',
        'US': 'United States',
        'Timor-Leste': 'East Timor',
        'West Bank and Gaza': 'Palestine',
    }

Next, create a CSV file with the populations for each country in the JHU/CSSE dataset, read in from Wikipedia. We skip non-country entries such as the Diamond Princess cruise ship.
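The script that follows relies on stripping Wikipedia's footnote markers (e.g. "[a]", "[b]") from the country names before matching them against the dataset. Here is a minimal standalone sketch of that regex replace, using made-up sample names rather than the real scraped table:

```python
import pandas as pd

# Hypothetical sample of Wikipedia-style country names with footnote markers.
names = pd.Series(['China[b]', 'India[c]', 'United States'])

# Remove any single-character "[x]" footnote indicator. regex=True makes the
# pattern a regular expression rather than a literal string to replace.
cleaned = names.str.replace(r'\[\w\]', '', regex=True)
print(cleaned.tolist())  # ['China', 'India', 'United States']
```

Names without a footnote marker pass through unchanged, so the replace is safe to apply to the whole column.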
    # get_country_populations.py
    """
    Get a CSV file of country populations using the same naming conventions
    as the Johns Hopkins / CSSE COVID-19 dataset.
    """

    import pandas as pd
    from country_aliases import country_aliases

    # This is the URL to the Wikipedia page for country populations we will use:
    url = ''
    # The table we're interested in is the first one read in from the webpage.
    df = pd.read_html(url)[0]
    # Rename the relevant column to something more manageable.
    df.rename(columns={'Country (or dependent territory)': 'Country'},
              inplace=True)
    # Get rid of the footnote indicators, "[a]", "[b]", etc.
    df['Country'] = df['Country'].str.replace('\[\w\]', '')
    # Set the 'Country' column to be the index.
    df.index = df['Country']

    # Our local copy of the COVID-19 cases file.
    LOCAL_CSV_FILE = 'covid-19-cases.csv'
    df2 = pd.read_csv(LOCAL_CSV_FILE)
    # Get the unique country names.
    jh_countries = df2['Country/Region'].unique()

    with open('country_populations.csv', 'w') as fo:
        print('Country, Population', file=fo)
        for country in jh_countries:
            # If a country named in the CSSE dataset isn't in our populations
            # table, then look it up in the aliases dictionary ...
            if country not in df.index:
                try:
                    country = country_aliases[country]
                except KeyError:
                    # ... if we can't find it in the aliases, skip it.
                    print('Skipping', country)
                    continue
            # Write the country and its population to the CSV file.
            print('"{}", {}'.format(country, df.at[country, 'Population']),
                  file=fo)

Now, to make the plots. The file plot_cases.py is broken down below (see my GitHub repository for this article for the complete source code file). First, some imports, including the country_aliases dictionary we defined earlier:

    import sys

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.ticker import MaxNLocator

    from country_aliases import country_aliases

The MaxNLocator import is used later to ensure that our tick labels are integers.
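Going back to the footnote-stripping step in get_country_populations.py, that line is worth checking on its own. Note that recent pandas versions require an explicit regex=True for pattern replacement (the bare call in the script relies on the older default behaviour). A standalone check, with made-up names:

```python
import pandas as pd

# Country names as scraped from Wikipedia, with footnote markers.
s = pd.Series(['China[b]', 'India', 'United States[c]'])

# Remove markers of the form "[a]", "[b]", ...; regex=True makes the
# pattern an explicit regular expression (required in pandas >= 2.0).
cleaned = s.str.replace(r'\[\w\]', '', regex=True)
print(cleaned.tolist())  # ['China', 'India', 'United States']
```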
To be as flexible as possible, there are some flags determining how the code works:

- READ_FROM_URL: whether or not to read the latest data from the JHU/CSSE GitHub repo. If False, a local CSV file is used.
- MIN_CASES: the minimum number of cases to start the plot at (the x-axis of the plot is the number of days since this threshold is reached for each country, not the absolute date).
- MAX_DAYS: the maximum number of days after a country reaches MIN_CASES to plot data for.
- PLOT_TYPE: either 'confirmed cases' to plot the evolution of the number of confirmed cases, or 'deaths' to plot the number of COVID-19 deaths.

The code:

    # If you have saved a local copy of the CSV file as LOCAL_CSV_FILE,
    # set READ_FROM_URL to False.
    READ_FROM_URL = True
    # Start the plot on the day when the number of confirmed cases reaches
    # MIN_CASES.
    MIN_CASES = 100
    # Plot for MAX_DAYS days after the day on which each country reaches
    # MIN_CASES.
    MAX_DAYS = 40

    #PLOT_TYPE = 'deaths'
    PLOT_TYPE = 'confirmed cases'

    # These are the GitHub URLs for the Johns Hopkins data in CSV format.
    if PLOT_TYPE == 'confirmed cases':
        data_loc = (''
                    'master/csse_covid_19_data/csse_covid_19_time_series/'
                    'time_series_covid19_confirmed_global.csv')
        LOCAL_CSV_FILE = 'covid-19-cases.csv'
    elif PLOT_TYPE == 'deaths':
        data_loc = (''
                    'master/csse_covid_19_data/csse_covid_19_time_series/'
                    'time_series_covid19_deaths_global.csv')
        LOCAL_CSV_FILE = 'covid-19-deaths.csv'

    # Read in the data to a pandas DataFrame.
    if not READ_FROM_URL:
        data_loc = LOCAL_CSV_FILE

Next, read in the data and the country populations:

    df = pd.read_csv(data_loc)
    df.rename(columns={'Country/Region': 'Country'}, inplace=True)

    # Read in the populations file as a Series (squeeze=True) indexed
    # by country.
    populations = pd.read_csv('country_populations.csv', index_col='Country',
                              squeeze=True)

The data are broken down by different regions (e.g. states, territories) for several countries, so groupby and sum over these countries.
Also rename those countries that are known by different names in the populations DataFrame so they match up:

    # Group by country and sum over the different states/regions of each
    # country.
    grouped = df.groupby('Country')
    df2 = grouped.sum()
    df2.rename(index=country_aliases, inplace=True)

Note: this operation will fold cases from different regions into their country's numbers (e.g. the British Overseas Territory of Bermuda gets counted as part of the United Kingdom).

There are two functions for the different kinds of plots: a bar chart of the progression of cases or deaths for a single country (make_bar_plot) and a line chart for comparing this progression across several countries (make_comparison_plot).

For make_bar_plot, we need to extract a Series corresponding to the cases for the required country (and whilst we're about it, convert the index to a proper DatetimeIndex):

    # Extract the Series corresponding to the case numbers for country.
    c_df = df2.loc[country, df2.columns[3:]].astype(int)
    # Convert the index to a proper datetime object.
    c_df.index = pd.to_datetime(c_df.index)

Next, discard the rows with fewer than MIN_CASES:

    c_df = c_df[c_df >= MIN_CASES]

We should probably give up at this point if there are no data to plot:

    n = len(c_df)
    if n == 0:
        print('Too few data to plot: minimum number of {}s is {}'
              .format(PLOT_TYPE, MIN_CASES))
        sys.exit(1)

    fig = plt.figure()

The plot is then generated with the usual Matplotlib methods. (The snippet below is abridged: the axes ax1 and ax2, and c_df_change, the day-to-day change in case numbers, are set up in the full script on GitHub.)

    ax2.bar(range(n), c_df_change.values)
    ax2.set_xticks([])
    ax1.set_xlabel('Days since {} {}'.format(MIN_CASES, PLOT_TYPE))
    ax1.set_ylabel(f'Number of {PLOT_TYPE}, $N$')
    ax2.set_ylabel('$\Delta N$')
    # Add a title reporting the latest number of cases available.
    title = '{}\n{} {} on {}'.format(country, c_df[-1], PLOT_TYPE,
                                     c_df.index[-1].strftime('%d %B %Y'))
    plt.suptitle(title)

The make_comparison_plot function is slightly more complicated. This time, c_df is a DataFrame instead of a Series because it may contain data for more than one country.
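To see what the groupby/sum step does, here is a tiny self-contained sketch with made-up numbers: the two 'United Kingdom' rows, standing in for a territory and the mainland, are folded into one row.

```python
import pandas as pd

# Hypothetical miniature dataset with two rows for the United Kingdom
# (e.g. a territory and the mainland) and one for France.
df = pd.DataFrame({
    'Country': ['United Kingdom', 'United Kingdom', 'France'],
    '3/1/20': [1, 35, 100],
    '3/2/20': [2, 40, 120],
})

# Group by country and sum over the different regions of each country.
df2 = df.groupby('Country').sum()
print(df2.loc['United Kingdom'].tolist())  # [36, 42]
print(df2.loc['France'].tolist())          # [100, 120]
```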
If we're normalizing by dividing the case numbers by each country's population, then we match c_df against the index of the population Series in the division, and multiply by 1,000,000 to get the figures per million people:

    # Extract the Series corresponding to the case numbers for countries.
    c_df = df2.loc[countries, df2.columns[3:]].astype(int)
    # Mask out (set to NaN) any entries with fewer than MIN_CASES.
    c_df = c_df[c_df >= MIN_CASES]
    if normalize:
        # Calculate confirmed case numbers per 1,000,000 population.
        c_df = c_df.div(populations.loc[countries], axis='index') * 1000000

At this point, the DataFrame c_df still has countries in its rows (index) and dates in its columns; a country will have NaN values on any dates before it reached MIN_CASES. We can take the transpose and then rearrange the DataFrame into the number of cases on each day after each individual country reaches MIN_CASES, as follows:

    # Rearrange the DataFrame to give countries in columns and number of
    # days since MIN_CASES in rows.
    c_df = c_df.T.apply(lambda e: pd.Series(e.dropna().values))

Finally, truncate the DataFrame after MAX_DAYS worth of data (the row indexed at MAX_DAYS-1):

    # Truncate the DataFrame after the maximum number of days to be
    # considered.
    c_df = c_df.truncate(after=MAX_DAYS-1)

The Matplotlib plot is complicated by the need to cater for both "normalized" and absolute data. In the case of the latter, we also plot threshold lines corresponding to cases doubling every $\tau_2 = 2, 3\;\mathrm{and}\;5$ days. The formula for these lines is $n = n_0 2^{t/\tau_2}$, or in logarithmic form:

$\log n = \log n_0 + \frac{t}{\tau_2}\log 2,$

where $n_0$ is MIN_CASES. There is some further code required to label the lines and to ensure that each label is rotated and reliably in the centre of its line.

    # Plot the data.
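The transpose-and-dropna trick is the subtle part, so here is a toy example with invented numbers: country B reaches the threshold one day after country A, and after the rearrangement both series start at day 0.

```python
import numpy as np
import pandas as pd

MIN_CASES = 100

# Hypothetical cumulative cases; B crosses the threshold a day later
# than A.
c_df = pd.DataFrame({'3/1/20': [100, 90],
                     '3/2/20': [150, 120],
                     '3/3/20': [210, 180]},
                    index=['A', 'B'])

# Mask values below the threshold (they become NaN).
c_df = c_df[c_df >= MIN_CASES]

# Countries to columns; rows become days since MIN_CASES was reached.
c_df = c_df.T.apply(lambda e: pd.Series(e.dropna().values))

print(c_df['A'].tolist())      # [100.0, 150.0, 210.0]
print(c_df['B'][:2].tolist())  # [120.0, 180.0]; day 2 for B is NaN
```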
    fig = plt.figure()
    ax = fig.add_subplot()
    for country, ser in c_df.iteritems():
        ax.plot(range(len(ser)), np.log10(ser.values), label=country)

    if not normalize:
        # Set the tick marks and labels for the absolute data.
        ymin = int(np.log10(MIN_CASES))
        ymax = int(np.log10(np.nanmax(c_df))) + 1
        yticks = np.linspace(ymin, ymax, ymax-ymin+1, dtype=int)
        yticklabels = [str(10**y) for y in yticks]
        ax.set_yticks(yticks)
        ax.set_yticklabels(yticklabels)
        ax.set_ylim(ymin, ymax)
        ax.set_ylabel(f'Number of {PLOT_TYPE}')
    else:
        # Set the tick marks and labels for the per 1,000,000 population
        # data.
        ax.set_ylim(np.log10(np.nanmin(c_df)), np.log10(np.nanmax(c_df)))
        ax.set_ylabel(f'Number of {PLOT_TYPE} per 1,000,000 population')

    # Label the x-axis.
    ax.set_xlim(0, MAX_DAYS)
    ax.set_xlabel(f'Number of days since first {MIN_CASES} {PLOT_TYPE}')

    def plot_threshold_lines(doubling_lifetime):
        """Add a line for the growth in numbers at a given doubling
        lifetime."""
        # Find the limits of the line for the current plot region.
        x = np.array([0, MAX_DAYS])
        y = np.log10(MIN_CASES) + x/doubling_lifetime * np.log10(2)
        ymin, ymax = ax.get_ylim()
        if y[1] > ymax:
            y[1] = ymax
            x[1] = doubling_lifetime/np.log10(2) * (y[1] - np.log10(MIN_CASES))
        ax.plot(x, y, ls='--', color='#aaaaaa')
        # We want to label the line at its centre, rotated to match its
        # slope, so transform its endpoints to display coordinates first.
        s = f'Doubling every {doubling_lifetime} days'
        p1 = ax.transData.transform_point((x[0], y[0]))
        p2 = ax.transData.transform_point((x[1], y[1]))
        xylabel = ((x[0]+x[1])/2, (y[0]+y[1])/2)
        dy = (p2[1] - p1[1])
        dx = (p2[0] - p1[0])
        angle = np.degrees(np.arctan2(dy, dx))
        ax.annotate(s, xy=xylabel, ha='center', va='center', rotation=angle)

    if not normalize:
        # If we're plotting absolute numbers, indicate the doubling time.
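The log-space formula for the threshold lines can be sanity-checked numerically: starting from $n_0 = 100$ with a doubling time of 3 days, the line should pass through 200 at $t = 3$ and 400 at $t = 6$. A quick standalone check mirroring the formula above:

```python
import numpy as np

MIN_CASES = 100

def threshold_log10(t, doubling_lifetime):
    """log10(n) for growth from MIN_CASES, doubling every doubling_lifetime days."""
    return np.log10(MIN_CASES) + t / doubling_lifetime * np.log10(2)

print(int(round(10 ** threshold_log10(3, 3))))  # 200
print(int(round(10 ** threshold_log10(6, 3))))  # 400
```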
        plot_threshold_lines(2)
        plot_threshold_lines(3)
        plot_threshold_lines(5)

    ax.legend()

Finally, to call the functions, provide some countries:

    make_bar_plot('Austria')
    plt.show()

    countries = ['Italy', 'Spain', 'United Kingdom', 'United States',
                 'Japan', 'France', 'South Korea', 'China', 'Austria',
                 'Iran']
    make_comparison_plot(countries, normalize=False)
    plt.show()

Comments

Joseph Karpinski, 1 year, 5 months ago:

You may want to change or even remove this code segment, since it never seems to run correctly and may confuse those who are trying to follow your code:

"Next, discard the rows with fewer than MIN_CASES. If the data were not cumulative but the daily values, we might need to do something like the following to find the first index above this threshold and retain all the data after it:

    idx_start = c_df[c_df.ge(MIN_CASES)].index[0]
    c_df = c_df.loc[idx_start:]"

It fails with:

    IndexError                            Traceback (most recent call last)
    <ipython-input-9-a1cc8aedd14e> in <module>
    ----> 1 idx_start = c_df[c_df.ge(MIN_CASES)].index[0]
          2 c_df = c_df.loc[idx_start:]
    ...
    IndexError: index 0 is out of bounds for axis 0 with size 0

christian, 1 year, 5 months ago:

Thanks – I've done that now. It was confusing.

Bruno, 1 year, 4 months ago:

Hi!
First of all, thanks – this is so helpful. I have a question: what is the variable ser inside the make_comparison_plot() script?

    for country, ser in c_df.iteritems():
        ax.plot(range(len(ser)), np.log10(ser.values), label=country)

christian, 1 year, 4 months ago:

Glad you found it interesting! The variable ser is the time series of case numbers for country: c_df.iteritems() iterates over the DataFrame column by column, returning the country name (the column label) and the data in each column as a Series in ser.

Hope that helps,
Christian

khaoula, 1 year, 2 months ago:

    countries = ['Italy', 'Spain', 'United Kingdom', 'United States',
                 'Japan', 'France', 'South Korea', 'China', 'Austria',
                 'Iran']
    make_comparison_plot(countries, normalize=False)
    plt.show()

The function make_comparison_plot doesn't show the figure – the window is empty.

christian, 1 year, 2 months ago:

Sorry to hear that – it works for me. Do you want to send me the script you're running and I'll take a look?
https://scipython.com/blog/plotting-covid-19-case-growth-charts/
Ngl.new_marker

Adds new markers to the existing table of markers.

Prototype

    nmark = Ngl.new_marker(wks, marker_strings, font_nums, xoffset=0.,
                           yoffset=0., aspect_ratio=1., size=1., angle=0.)

Arguments

wks
    The identifier returned from calling Ngl.open_wks.

marker_strings
    A single string (or list of strings) to indicate what characters to pick from the font number (or list of font numbers) in the font_nums argument below. The strings will usually be a single character, but that is not a requirement. For example, you could create "Hello world" as a marker (see Example 5).

font_nums
    An integer scalar (or list of scalars, having the same length as the list of strings in the marker_strings argument) containing the font table numbers indicating which font table to use for the characters in the marker strings.

xoffset=0., yoffset=0.
    Optional scalar arguments that are the X and Y offsets of the marker. The units for these offsets are character widths; that is, an x offset of 1. would move the marker a character width to the right. In most cases these offsets are used for only slight adjustments. The default value of 0.0 indicates no offset.

aspect_ratio=1.
    An optional scalar that is the aspect ratio of the marker. A value greater than 1. stretches the marker in the vertical direction, and a value less than 1. stretches it in the horizontal direction. The default of 1.0 produces a square aspect ratio. A value less than 0. will be set to the default.

size=1.
    An optional scalar size multiplier for the default marker size. A value of 2. doubles the marker size, and a value of 0.5 halves the marker size. The default is 1.0. A size less than 0. will be set to the default.

angle=0.
    An optional scalar that is the angle at which to rotate the marker counter-clockwise; it must be greater than or equal to 0. The default of 0.0 produces no rotation.
Return value

nmark
    An integer representing a marker index into the current marker table that contains the new marker added.

Description

Given a workstation, a marker and font table number, and values for the X/Y offset, size, aspect ratio, and angle, this function adds the given marker to the existing marker table. A PyNGL script is given below (Example 4) that allows you to view an enlarged marker to determine if it needs to be adjusted.

See Also

Examples

Example 1

See scatter2.py (output).

Example 2

This example adds a filled square to the marker table and keeps all of its default settings:

    mstring = "y"
    fontnum = 35
    xoffset = 0.0
    yoffset = 0.0
    ratio   = 1.0
    size    = 1.0
    angle   = 0.0
    new_index = Ngl.new_marker(wks, mstring, fontnum, xoffset, yoffset,
                               ratio, size, angle)

To make this same marker twice as high as it is wide, change the value for the ratio:

    ...
    ratio = 2.0
    ...
    new_index = Ngl.new_marker(wks, mstring, fontnum, xoffset, yoffset,
                               ratio, size, angle)

Example 3

This example adds a filled triangle to the marker table and rotates it so that the tip is pointing to the right:

    mstring = "u"
    fontnum = 34
    xoffset = 0.0
    yoffset = 0.0
    ratio   = 1.0
    size    = 1.0
    angle   = 270.0
    new_index = Ngl.new_marker(wks, mstring, fontnum, xoffset, yoffset,
                               ratio, size, angle)

To shift this rotated triangle so that the center location is more towards the center of the triangle, set the X offset value:

    xoffset = 0.37
    ...
    new_index = Ngl.new_marker(wks, mstring, fontnum, xoffset, yoffset,
                               ratio, size, angle)

Note: the rotation is applied first, then the offsets.

Example 4

This script draws a new marker on a simple plot, with a small red cross marking the marker's center:

    import Ngl

    #
    # Open a workstation and change the color table.
    #
    wkres = Ngl.Resources()
    wkres.wkColorMap = ["white","black","YellowGreen","red","gray",
                        "SpringGreen"]
    wks_type = "ps"
    wks = Ngl.open_wks(wks_type,"marker",wkres)

    xyres = Ngl.Resources()
    xyres.nglDraw  = False
    xyres.nglFrame = False

    x = [0.,1.,1.,0.,0.]
    y = [0.,0.,1.,1.,0.]

    #
    # Create a new marker, keeping all its default settings.
    # This particular marker is a filled triangle.
    # You can change "mstring" and "fontnum" to whatever character in a
    # particular font table that you want to create a marker from.
    #
    mstring = "u"
    fontnum = 34
    xoffset = 0.0
    yoffset = 0.0
    ratio   = 1.0
    size    = 1.0
    angle   = 0.0
    marker_type = "triangle"

    xvert = [0.45,0.55]
    yvert = [0.50,0.50]
    xhorz = [0.50,0.50]
    yhorz = [0.45,0.55]

    #
    # Set up marker resources.
    #
    mkres = Ngl.Resources()
    mkres.gsMarkerSizeF = 0.5
    mkres.gsMarkerColor = "YellowGreen"

    #
    # Set up some line resources.
    #
    lnres = Ngl.Resources()
    lnres.gsLineColor      = "Red"
    lnres.gsLineThicknessF = 4.0

    xyres.tiMainString = "Marker examination: " + marker_type
    canvas = Ngl.xy(wks,x,y,xyres)

    #
    # Create and draw the marker.
    #
    mkres.gsMarkerIndex = Ngl.new_marker(wks,mstring,fontnum, \
                                         xoffset,yoffset,ratio,size,angle)
    marker = Ngl.add_polymarker(wks, canvas, [0.5], [0.5], mkres)

    #
    # Draw a little red cross showing the center of the marker. If you
    # don't like where the center is, you can go back and modify xoffset
    # and/or yoffset above.
    #
    line1 = Ngl.add_polyline(wks, canvas, xvert, yvert, lnres)
    line2 = Ngl.add_polyline(wks, canvas, xhorz, yhorz, lnres)

    Ngl.draw(canvas)
    Ngl.frame(wks)   # Advance the frame.

    Ngl.end()

Example 5

This example shows how to use several characters from a given font as a single marker. It also illustrates using all possible argument defaults for Ngl.new_marker.

    import Ngl

    wks = Ngl.open_wks("ps","marker")

    #
    # Using all possible default arguments, create a "Hello world"
    # marker using the Helvetica-bold font (font number 21).
    #
    new_index = Ngl.new_marker(wks, "Hello world", 21)

    #
    # Plot the new marker at several positions using Ngl.y.
    #
    res = Ngl.Resources()
    res.xyMarkLineMode     = "Markers"  # Default is to draw lines.
    res.xyMonoMarkLineMode = True       # Default is only one marker style.
    res.xyMarkers          = new_index  # Set new marker.

    plot = Ngl.y(wks, [0.0, 0.25, 0.5, 0.75, 1.0], res)

    Ngl.end()
http://www.pyngl.ucar.edu/Functions/Ngl.new_marker.shtml
AWS Startups Blog

Demystifying the ecosystem in China

China is a highly active market, with 3,683 early-stage investments totaling $20.2 billion in 2016. It's hard to believe that in just 13 years, China has transitioned from a country that established its first-ever VC fund (IDG Venture China) to the world's second largest breeding ground of unicorns (39, just behind the US). The country's shining GDP numbers over the past two decades are no longer news. Now, the world is turning its head toward a new wave of successful companies: Chinese startups. These range from the well-known "giants" like Xiaomi (a consumer electronics company that was founded in 2010, became a unicorn in 2012, and hit a valuation of $45 billion in 2014) and Didi (a ride-sharing app that was founded in 2012, acquired Uber China in 2016, and is now valued at $34 billion), to up-and-coming "shakers" like Toutiao (an AI-powered news and information mobile app company that raised $1 billion in its last round of financing this April) and Mobike (China's leading bike-sharing platform, which raised a $215 million Series D in early 2017).

China's growing success and competitive edge can be attributed to two things: 1) key players that foster and grow the unique ecosystem, and 2) a changing landscape of new cities and industries. In this post, we explore China's competitive edge and the startups that it's enabling.

Key members in the ecosystem

The key members in the ecosystem are the Chinese government, local VCs, and startups and entrepreneurs.

The Chinese government

The Chinese government plays an unsurprisingly important role in the startup ecosystem by regulating, subsidizing, and funding the entire ecosystem.

Structurally influential. It might sound funny to have all the top investors like Bob Xu (Zhen Fund) and Neil Shen (Sequoia Capital) sit together in a classroom and take an exam, but it indeed happened last year. In China, all investment managers need to be licensed.
This helps the government assess the investment capability of those fund managers because the government, as I discuss later in this post, is putting real money into those funds. It might seem a bit ludicrous to judge a fund and its partners by a ubiquitous exam, but some may argue that our love for examinations is almost a cultural heritage (for example, our imperial civil service examination system dates back to the Qing dynasty). Jokes aside, the exam does provide a quick screening mechanism for government officials who might otherwise take ages to understand what "IRR" even means.

And highly supportive. The government is also very strategic about shifting the labor-driven economy to a more productive technology-driven model. Up to 20% of China's trillion-RMB National Social Security Fund is allocated to VC/PE investment, and billions of RMB have been set aside in Government Guiding Funds (GGFs) to invest in strategic areas like Industry 4.0. For example, Prosperity Investments is one of the biggest GGFs, with $10 billion AUM. Over 900 GGFs have been set up to facilitate the country's economic restructuring.

Yet still predominantly relies on the private market mechanism. Despite the outpouring of government money, the Chinese government chooses to leverage the knowledge and expertise of proven professional investors. GGFs have primarily adopted the fund-of-funds format, letting those professional investors make the investment decisions. Top VCs like IDG, Sequoia, and Shenzhen Capital have all raised new funds with capital from GGFs.

Local VCs

Local VC funds, a.k.a. RMB funds, are becoming the major force, with over 90% of the new funds raised in 2016 being RMB funds, compared to a mere 10% in 2005.

Faster in decision making. These funds are generally a lot faster in decision making. An RMB fund typically takes half the time (3-5 weeks) to close a deal compared to a USD fund (6-10 weeks).
This is mainly because most of the local LPs, primarily local high-net-worth individuals or family offices, demand faster returns. Many RMB funds are structured on a 5+2 model (a 5-year investment period and a 2-year exit period), so the fund managers need to deploy the capital faster than their US peers in order to see returns (hopefully) in a shorter cycle. This influx of local funds has pushed the USD funds to focus more on later-stage deals.

Aggressive in valuation. RMB funds tend to offer higher valuations, normally 10-20% more, when competing with USD funds on hot deals. The fund managers prefer to demonstrate their capabilities and build up a track record by getting their hands on those high-profile deals.

Startups and entrepreneurs

Southern style vs. northern style. Startups in China have different business cultures and styles based on their geographical location. Typically, the "southern style" is closer to the traditional way of doing business in China (keep a low profile until you get your product out, then focus on making the business profitable). In contrast, the "northern style" tends to be a lot higher profile and focused on selling the vision to investors right off the bat. This is because southern China traditionally boasts lower-margin manufacturing industries that reward operational excellence, while entrepreneurs from northern China are better at reading the trends. This has established a divide in the types of startups that incubate in each region. For example, most successful software startups (like Didi [ride-sharing app] and Toutiao [news app]) are from Beijing, while hardware startups (like DJI [drones] and JMGO [micro projectors]) are from Shenzhen.

A changing landscape

Tier 1 cities vs. tier 2 cities. Beijing, Shanghai, and Shenzhen are considered the center stage of startups in China due to a historically larger talent pool, sufficient funding, and the open attitude of the local governments.
However, there has been recent growth in tier 2 cities due to more aggressive government subsidies designed to turn these cities into dedicated tech hubs. For example, Hangzhou, the capital of Zhejiang province (and the headquarters of Alibaba), is now focusing on e-commerce. Chengdu, the heart of southwest China, picked software and attracted many IT companies (IBM, Microsoft, and SAP) to set up regional headquarters there, too. Leading tech firms provide the infrastructure necessary to support a local entrepreneurial ecosystem.

New hot verticals. The growth of China's economy in the last decade led to a "consumption upgrade" that established strong online shopping habits. Online-to-Offline (O2O) and vertical e-commerce platform startups flourished following the success of Chinese giants Alibaba and JD.com. But after rounds of capital and multiple merger-of-mergers (like Meituan.com and Dianping.com [two general O2O companies that cover everything from food delivery to hotel booking]), VCs are in search of something new. We see increasing interest in the shared economy, SaaS, and artificial intelligence.

The shared economy is growing alongside the population base. Take smart bike-sharing platforms as an example. They have been around for fewer than three years, yet they have significant traction. OFO and Mobike closed over $660 million collectively in their recent rounds of funding and have aggressive plans to expand outside of China. What distinguishes these startups is that they take a familiar idea and customize it into something novel. These Chinese bikes are dockless and mobile-activated, which is a lot more user-friendly and lower-friction than the models in the US.

SaaS companies (especially in the CRM area) that started in the early 2000s are starting to bear fruit. Beijing-based Xiaoshouyi is the Chinese version of Salesforce; it closed a $40 million round in January 2017 after years of market education.
As with the shared economy, the reason local Chinese SaaS startups can take on global tycoons such as Salesforce is localization. For example, Xiaoshouyi has embedded its CRM system in WeChat, one of the most popular social messaging platforms in China. Also, local SaaS companies are more mobile-friendly than western SaaS companies, which tend to be more PC-based.

The connected tech community has brought the artificial intelligence trend to the Chinese startup community. Unlike in the US, though, the big Chinese AI startups are diverse horizontally and innovate across all industries. Just looking at the top AI companies, we have Face++ (facial recognition technology used by Alibaba), Toutiao.com (an AI-powered news app with over 700 million users), and iCarbonX (a big data platform that's creating a "Digital You" to predict illness). Due to the unique way of doing business in China, these startups are willing to tackle the harder problems and take on larger regulatory risks. They're focused more on the growth potential.

So what's next?

We've discussed specific trends regarding industry, regional development, and behavior in China. Historically, China has been known for its "me too" model because it has replicated successful US business models locally (obvious examples are Alibaba/Amazon, Baidu/Google, and Didi/Uber). Now, with stronger government support, an ample talent supply, and extensive capital injection, we can expect a change. There's a huge market for experimenting, and companies are quick to launch and shut down. The entire world is waiting to see what the next "big bang" coming out of this black box will be.

Warren is on the Startup Business Development team at AWS in China. Prior to Amazon, Warren spent 7 years working as a strategy consultant advising mainly Fortune 500 clients and 2 years at a buy-out private equity fund in China, focusing on consumer goods and services.
https://aws.amazon.com/blogs/startups/demystifying-the-ecosystem-in-china/
ISINF(3)                 BSD Programmer's Manual                 ISINF(3)

NAME
     isinf - test for infinity

LIBRARY
     libc

SYNOPSIS
     #include <math.h>

     int isinf(real-floating x);

DESCRIPTION
     The isinf() macro determines whether its argument x is an infinity
     (positive or negative). An argument represented in a format wider
     than its semantic type is converted to its semantic type first. The
     determination is then based on the type of the argument. It is
     determined whether the value of x is an infinity. Infinities are
     not supported.

RETURN VALUES
     The isinf() macro returns a non-zero value if the value of x is an
     infinity. Otherwise 0 is returned.

SEE ALSO
     fpclassify(3), isfinite(3), isinff(3), isnan(3), isnanf(3),
     isnormal(3), math(3), signbit(3)

     IEEE Standard for Binary Floating-Point Arithmetic, Std 754-1985,
     ANSI.

STANDARDS
     The isinf() macro conforms to ISO/IEC 9899:1999 ("ISO C99").

MirOS BSD #10-current                                               March
http://www.mirbsd.org/htman/i386/man3/isinf.htm
Logging Tutorial - Part 1

2000-12-14 The Java Specialists' Newsletter [Issue 003] - Logging part 1

Author: Dr. Heinz M. Kabutz

If you are reading this, and have not subscribed, please consider doing it now by going to our subscribe page. You can subscribe either via email or RSS.

Welcome to the third "The Java(tm) Specialists' Newsletter", a low-volume newsletter that is aimed at Java programmers who are facing the real world every day. I hope you are enjoying reading this information and that it will help you to be more productive in your work.

I want to thank Java-Guru-In-The-Making Dr. Christoph Jung for volunteering to write a piece on dynamic proxies in an upcoming newsletter. Please let me know if there are topics that you would like discussed, or if you would like to share some of your own experiences through this newsletter. This newsletter is distributed free of charge to anyone interested, so please forward it to friends and colleagues who might find the information in it useful.

I drew some fire from respectable quarters that some of the code I presented in the last newsletter was a little bit on the obscure side. The Thread example was meant to illustrate the concept of making an anonymous inner class and immediately putting it to use, as opposed to just using it as a parameter. The most probable place you would find code like that shown in the last newsletter is in the unit tests of my code. If you have a method that requires a Vector as a parameter, you can use the example from the previous newsletter to succinctly pass it test data. Even though the examples might seem a bit obscure at first glance, I believe that once you've seen the "Anonymous Inner Class Init Block" example, you will begin to recognise it easily enough that it loses most of its obscurity. If you are not writing unit tests for your code, why not?
There are two good books that contain information on unit tests: Refactoring by Martin Fowler and eXtreme Programming by Kent Beck. Unit tests have saved my butt a number of times, because I am not a very "good" programmer, in that I make mistakes all the time and end up messing up old working code in the process - does that sound familiar? (So, if you let me work on some of your code and you don't have unit tests, BEWARE!) When people ask me what my job is, I tell them I am a professional bug writer, because that's what I'm best at. Unit tests only work if the whole development team is using the concept, much like CASE tools such as Rational Rose. I fought a losing battle 2 years ago when most of the team on a project I was on stopped using Rational Rose, and I spent most of my time updating other developers' diagrams.

Talking of unit tests, it is possible to use the Robot (java.awt.Robot) to write unit tests for GUIs. Click on TellMeAboutTheUIRobot for more information.

While we are on the topic of things going wrong, we should always expect things to go wrong in the field, and it is very important to be able to remotely figure out what went wrong and to prevent it from going wrong in future. An example of a remote fix was the Mars bug that had to be debugged and repaired remotely (not by me ;-) - the Risks digest had some interesting articles on that. With the internet being so widely accessible, we have a great challenge, because our programs might fail somewhere in a remote part of this planet that might not be very pleasant for a support team to visit, for example amongst a man-eating pygmy tribe in Java (non-tm). Preferably we would ask them to email us a log of what went wrong so that we can look at the log and determine what the problem was.

When you start with a new project, it is very easy to give logging too little thought, with unpleasant consequences.
It is also possible to give it too much thought and design such a complex logging system that the rest of the programming team cannot wait, and starts writing their own logging functions (System.out.println springs to mind). In that case you are back to the first problem of too little thought, because each developer develops his method as he goes along. This newsletter is not going to tell you which way is best - you have to decide for yourself - but it will show you some tricks and gotchas that I've seen several times.

In designing a logging system, there are several questions we have to face. The combination of these factors can sway your logging system in various ways, which is why I will not say which is best. I will now present the first of a series of case studies of logging systems that one might want to write.

Frank the Unix hacker has been writing a 3-tier application server that uses several third-party libraries which output logging information such as "Connected to ISAP" or "Disconnected from AISN" or "State changed to TLA". Since Frank is a hacker, he knows about JAD.EXE and is forever grateful to Pavel for making all CLASS files open-source, but he is a bit worried about rewriting the third-party code to output to his logging system because of something called "copyright law", whatever that is. The problem is that he needs to know exactly in which order things happen with respect to his own application server, so preferably the log should show all output to the console in the same order in which it appeared.

Since Frank is a UNIX hacker, he remembers a little utility called "tee" that represents a t-piece in a pipe, so that you can send output to two destinations. For example:

    echo hello world | tee output > output2

would send the string "hello world" to two output files, "output" and "output2". Frank decides to write an IO stream decorator that behaves as a TeeOutputStream.
(Disclaimer: I wrote my first TeePrintStream in the middle of 1997 - I have since seen the same thing published in a magazine.) If you don't understand decorators, have a look at the IO chapter of the excellent book "Thinking in Java" by Bruce Eckel, or visit a good Java course. The TeeOutputStream would look something like this:

    // TeeOutputStream
    import java.io.*;

    public class TeeOutputStream extends FilterOutputStream {
      private final OutputStream out2;

      public TeeOutputStream(OutputStream out1, OutputStream out2) {
        super(out1);
        this.out2 = out2;
      }
      public void write(int b) throws IOException {
        super.write(b);
        out2.write(b);
      }
      public void flush() throws IOException {
        super.flush();
        out2.flush();
      }
      public void close() throws IOException {
        super.close();
        out2.close();
      }
    }

The TeeOutputStream would now be able to take two output streams and send the same information to both of them, as shown in the main method. (Note that the FileOutputStream constructor can throw an exception, so main must declare throws IOException.)

    // ... the main class that starts up the application server
    public static void main(String[] args) throws IOException {
      PrintStream out1 = System.out;
      OutputStream out2 = new BufferedOutputStream(
        new FileOutputStream("LOG"));
      TeeOutputStream newOut = new TeeOutputStream(out1, out2);
      System.setOut(new PrintStream(newOut, true));
      out1 = System.err;
      newOut = new TeeOutputStream(out1, out2);
      System.setErr(new PrintStream(newOut, true));
      // ... some more code...
    }

My personal preference would be to rewrite those lines as follows, but you should do what is most comfortable to you:

    // ... the main class that starts up the application server
    public static void main(String[] args) throws IOException {
      OutputStream log = new BufferedOutputStream(
        new FileOutputStream("LOG"));
      System.setOut(
        new PrintStream(new TeeOutputStream(System.out, log), true));
      System.setErr(
        new PrintStream(new TeeOutputStream(System.err, log), true));
      // ... some more code...
    }

The astute among you should now be jumping up and down, pointing at the glaringly obvious mistake in the above code.
There is no provision for the log file growing and filling up the hard disk. Frank (correctly) assumes that his program is never going to run for a long enough period that the size of the log file would grow bigger than 10 KB, but once all his bugs are removed, something should be done to prevent a big log file. The biggest log file that an advanced support man sent me was 1.1 GB in size, when I used Frank's TeeOutputStream system.

Anyway, the beauty of Frank's approach is that whenever he wants to print to the log, he just does a System.out.println() or exception.printStackTrace(). Anything that goes to the console also goes to his log, in the same order in which it appeared to the user. He can also switch off logging to a file by not setting the System.out PrintStream to print to the tee, or he can set his program to only print to the log.

If we look back at the questions we face with the logging system, we see that we don't have a default way of showing WHEN the event occurred, although that could be added by changing the write() method of the FileOutputStream to print the date/time after each newline. The size of the log file can be monitored by implementing a different FileOutputStream. The logging to file can be switched off easily. The client code is very simple: System.out.println("Changed to state TLA");

That's the end of Frank's logging system. As you have seen, there is a lot of scope for changing it, depending on your requirements, but there is no right answer that fits all scenarios. Next week I will present another scenario with different advantages...

Please remember to forward this free newsletter to whoever you think may be interested.

With regards
Heinz

This material from The Java(tm) Specialists' Newsletter by Maximum Solutions (South Africa). Please contact Maximum Solutions for more information.
http://www.roseindia.net/javatutorials/logging_tutorial_part1.shtml
2018-12-18 09:59:55 UTC - Michael-Yu: @Michael-Yu has joined the channel
----
2018-12-18 10:06:12 UTC - Victor Li: @David Kjerrumgaard currently, is there a Nar package we can use? we want to try to integrate our NiFi platform with Pulsar. for now, we are using kafka as a bridge between nifi and pulsar. But just for our demo.
----
2018-12-18 10:19:19 UTC - Nicolas Ha: It looks like a kubernetes problem, but asking here just in case someone saw this before. I had a 1-node kubernetes with my app and pulsar, and I added two more nodes. It then stopped working, and I had exceptions from the client in the logs cannot find "proxy". I checked the DNS using <> - everything looks fine. I rebuilt the three nodes from scratch, using coredns instead of kubedns (with kubespray). It still doesn't work. Anyone has a clue what I should look at next?
----
2018-12-18 12:16:05 UTC - Ganga Lakshmanasamy: Is there any plugin available to read files from remote server using streamlio and convert it into an object model?
----
2018-12-18 22:17:04 UTC - Ali Ahmed: @Ganga Lakshmanasamy what exactly do you mean ?
----
2018-12-18 23:51:51 UTC - Grant Wu: Is there a declarative way to setup namespaces?
----
2018-12-19 03:08:04 UTC - Sijie Guo: @Ryan Samo sorry to ping this thread. a couple of pulsar users were asking if you can write a blog post about this :slightly_smiling_face:
----
2018-12-19 03:15:06 UTC - David Kjerrumgaard: @Victor Li I will build you a 1.7 nar file and share it with you ASAP. The current code in the PR is slated for the 1.9 release of NiFi, but I should be able to back-port it to 1.7
----
2018-12-19 03:33:07 UTC - David Kjerrumgaard: @Victor Li Here is a 1.7.0 compatible NAR file.
----
2018-12-19 03:34:09 UTC - David Kjerrumgaard: @Victor Li Please let me know if you have any issues with it. I don't have a 1.7 environment readily available to test it with, but it should work. I apologize in advance if you have issues with it
----
2018-12-19 06:12:47 UTC - Victor Li: @David Kjerrumgaard seems this nar file depends on another nar file:
----
2018-12-19 06:12:49 UTC - Victor Li: 2018-12-19 13:35:42,701 WARN [main] org.apache.nifi.nar.NarClassLoaders Unable to resolve required dependency 'nifi-pulsar-client-service-nar'. Skipping NAR '/home/nifi/nifi-1.7.1/./work/nar/extensions/nifi-pulsar-nar-1.7.0.nar-unpacked
----
2018-12-19 06:13:15 UTC - Victor Li: there should be another nar file: nifi-pulsar-client-service
----
2018-12-19 06:48:34 UTC - Victor Li: I try to clone your nifi-pulsar-client-service project from github. and build in my local. but seems some dependencies still missing.
----
http://mail-archives.us.apache.org/mod_mbox/pulsar-users/201812.mbox/%3C5c1a0b26.1c69fb81.58fba.958f@mx.google.com%3E
HTML5 is quickly becoming the "gold standard" of cross-platform gaming solutions. Just about every major platform (both desktop and mobile) supports games developed using HTML5, so it makes sense from a development standpoint: your games can easily be distributed across multiple channels for maximum profit potential and exposure. If you're unfamiliar with HTML5, it is a combination of HTML, CSS, and JavaScript. Unlike some of the other game development platforms, such as GameSalad and Construct 2, LimeJS does require some programming experience (particularly in JavaScript) for success. If you are unfamiliar with using JavaScript to create games in HTML5, check out the JavaScript for Beginners course.

There are quite a few reasons why LimeJS is a good framework choice when developing 2-D games. First, it's free to use, and there are no extra licenses to purchase when publishing your game to a specific platform. Of course, free doesn't always mean better, but in this case LimeJS actually offers more features than many of the third-party, premium framework options. It's also very capable: LimeJS offers mouse and keyboard support, Box2D, and other frameworks for 2-D graphics drawing. It also supports HTML and Canvas rendering. More recently, the development team added support for WebGL to provide even more functionality. LimeJS offers support for basic drawing, images, sprites, and other assets, as well as object interactions through events and a host of other features that are simply not available in other frameworks (especially free ones).

LimeJS is built using the Closure Library, a JavaScript library built by Google. It has been used to create many of Google's products, including Gmail and Drive. Closure is an excellent framework and provides a very efficient runtime for your games built using the LimeJS environment.
Installing LimeJS

Unlike many of the other game development engines available for 2-D HTML5 games, LimeJS does take a little bit more effort to install. The first step is to download the project from the LimeJS development page. Installation instructions are provided at the bottom of the download page and include installing Python 3.X (although Python 2.6+ is listed as the minimum requirement for set up). In most cases, it is easiest to run the command-line option for installation, which means you need to add the Python location to the Windows PATH environment variable.

Creating a New Project

After you have installed LimeJS successfully, you can create a new project by navigating to the LimeJS folder using the terminal and typing the following command:

    bin/lime.py create game_tutorial

This command creates a folder called "game_tutorial" in the LimeJS folder. That is where all of your game files will reside. Please remember that if you move this folder anywhere else, your game will not run until it has been compiled, as it relies on libraries in the LimeJS folder during development.

Now that your game folders have been created, you can actually begin developing your HTML5 game using LimeJS. To get started, check out HTML5 Mobile Game Development for Beginners. This course explains in detail how to create actual 2-D games using this open-source platform.

Unique Features of Closure

There are a couple of things that make LimeJS unique in terms of game development environments. For instance, one of Closure's most notable features is its unique namespacing system. Unlike some other code libraries, Closure code is separated between files, and each file can request other files during execution. While you are developing using Closure in LimeJS, you only have to include the base.js file. This file gives you the functions goog.provide(namespace) and goog.require(namespace). The provide function declares that all code in the file can be requested.
The require function requests another file with the same name. Closure uses a built-in deps.js file to map namespaces to specific file paths. In other words, if you add new namespaces, you need to run the following command to update the deps.js file so that your new file can be accessed:

    python bin/lime.py update

Although it is not required that you use the provide function for your own projects, it is highly recommended, because it provides more reliable results during compiling.

As you can see, creating HTML5 games using LimeJS is slightly more advanced and relies on JavaScript extensively. It also helps if you have a basic understanding of Python. The Python for Beginners course is an excellent resource. If you are new to programming, you might consider using a different HTML5 engine, such as GameSalad. This engine provides a drag-and-drop interface that eliminates the need for programming. You can learn more about developing with GameSalad in the Introduction to Mobile Games Development course.

Whether you choose to use LimeJS or a more graphically oriented game development environment, HTML5 is the future of cross-platform gaming. Just about every new game development engine released uses HTML5, because it is supported by so many platforms and is a very efficient way to create 2-D games quickly and easily. The good news is that LimeJS is completely free to use, and you can have it installed in less than 20 minutes, meaning that it's definitely worth trying if you are at all comfortable using JavaScript. The fact that it is based on the Closure Library (the same library that powers Gmail and Drive) means that you can expect high performance and low overhead when creating games using LimeJS. There is also a very strong development community that supports LimeJS. Most of these people will be more than happy to answer any questions you have along the way and help you create fun and engaging 2-D games for just about any platform you want.
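The deps.js idea described above - a namespace-to-file map consulted when a file is required - can be sketched conceptually. This is an illustrative toy in Python, not Closure's actual implementation; every name and path in it is made up:

```python
# Toy model of Closure's deps.js: map each namespace to its file path plus
# the namespaces that file itself requires, then resolve transitively.
DEPS = {
    'game.main':  ('game_tutorial/main.js', ['lime.Scene']),
    'lime.Scene': ('lime/scene.js',         []),
}

def files_to_load(namespace, loaded=None):
    """Return the files needed for `namespace`, dependencies first."""
    if loaded is None:
        loaded = []
    path, requires = DEPS[namespace]
    for dep in requires:          # load what this file requires first
        files_to_load(dep, loaded)
    if path not in loaded:        # then the file itself, exactly once
        loaded.append(path)
    return loaded

order = files_to_load('game.main')
# order == ['lime/scene.js', 'game_tutorial/main.js']
```

The point is only that goog.require can pull in any file by name because the deps map knows where each namespace lives - which is why the map must be regenerated (lime.py update) whenever you add a namespace.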
https://blog.udemy.com/html5-game-tutorial/
It started with an interview

Last week in an interview for a Django developer job, I was asked:

    thing = (x**2 for x in xrange(10))

What is the type of thing?

Although I was able to identify that the type is dependent on the () around the list-comprehension-like construction, I didn't know the exact type that thing would be. The answer is a generator. This post shows some of the functionalities of generators and how they can be used in Python control flow.

Generator expressions

Generators can be created with generator expressions. A generator expression is a bit like a list comprehension. A list comprehension uses square brackets []. In Python...

    >>> [x**2 for x in range(10)]
    [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

A generator expression is a shortcut that allows generators to be created easily with a similar syntax - this time it's using parentheses ().

    >>> (x**2 for x in range(10))
    <generator object <genexpr> at 0x2fa5eb0>

Generators are iterators

Generators "provide a convenient way to implement the iterator protocol". In Python, an iterator provides two key functions, __iter__ and next, so a generator itself must provide these two functions:

    >>> my_gen = (x**2 for x in range(10))
    >>> my_gen.__iter__
    <generator object <genexpr> at 0x293c3c0>

__iter__ is there and returns the generator; now for next...

    >>> my_gen.next()
    0
    >>> my_gen.next()
    1

Therefore next works. We can keep hitting it until...

    >>> my_gen.next()
    81
    >>> my_gen.next()
    ---------------------------------------------------------------------------
    StopIteration                             Traceback (most recent call last)
    <ipython-input-19-b28d59f370d8> in <module>()
    ----> 1 my_gen.next()

    StopIteration:

A StopIteration is raised - so the generator does everything we'd expect it to by the iterator protocol.

Building a generator with yield

Generators allow loops over sets of values to be programmed without the full list of values being calculated first. A generator can be used so that next is called before each iteration, as required.
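To make that laziness concrete, here is a small sketch (noisy_square is my own illustrative name, not from this post; it uses Python 3's next(gen), where this post's examples use Python 2's gen.next()):

```python
computed = []  # records which inputs have actually been squared

def noisy_square(x):
    computed.append(x)
    return x ** 2

# List comprehension is eager: all ten calls happen immediately.
squares_list = [noisy_square(x) for x in range(10)]
eager_calls = len(computed)               # 10

# Generator expression is lazy: creating it computes nothing.
computed = []
squares_gen = (noisy_square(x) for x in range(10))
lazy_calls_before = len(computed)         # 0
first = next(squares_gen)                 # computes exactly one value
lazy_calls_after = len(computed)          # 1
```

So the square brackets pay the whole cost up front, while the parentheses pay only for the values you actually ask for.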
In this way, only the values required for each iteration need to be computed.

The yield keyword - simple example

Adding yield to a function allows for generators to be constructed 'manually'. At its very simplest, a function could be written just to generate a single value. However, to show that a generator can return to its previous state when called again, let's make one with two values. For example...

    def two_things():
        yield 1
        yield 'hi'

Now we can make an instance of the generator.

    >>> my_things = two_things()
    >>> my_things
    <generator object two_things at 0x31d0960>

And we can ask for the next value.

    >>> my_things.next()
    1

Now when we call next again, our generator continues from the state of the last yield.

    >>> my_things.next()
    'hi'

So you see how different values can be returned, one after the other. And after that second thing, the generator now raises a StopIteration, since it has no further values to return.

Since a generator implements the iterator protocol, it can be used in a for statement and therefore in a list comprehension. This makes for a convenient way to check the values of a limited generator like this one.

    >>> [x for x in two_things()]
    [1, 'hi']

More complex example with yield

So let's write Fibonacci as a generator. I'm going to start with doctests to create the definition of the function, then put the code at the end. What I like about the doctests in this example is that in 3 fib is tested with next, but in 4 it's tested using a list comprehension.

    def fib(last):
        """
        1. Creates a generator

        >>> type(fib(0))
        <type 'generator'>

        2. fib(0) just generates the 0th value (1)

        >>> zero_fib = fib(0)
        >>> zero_fib.next()
        1
        >>> zero_fib.next()
        Traceback (most recent call last):
            ...
        StopIteration

        3. fib(1) creates a generator that creates the 0th (1) and 1st (1) values of the fib seq

        >>> one_fib = fib(1)
        >>> one_fib.next()
        1
        >>> one_fib.next()
        1
        >>> one_fib.next()
        Traceback (most recent call last):
            ...
        StopIteration

        4. fib(10) generates the fibonacci numbers 0 through 10

        >>> [x for x in fib(10)]
        [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
        """
        result = 1
        x = 0
        a = 1
        b = 0
        while x <= last:
            yield result
            result = a + b
            b = a
            a = result
            x += 1

That's all - have fun with generators!
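The fib above stops at last, but a generator can just as well be infinite, with the caller deciding how many values to take - a sketch in Python 3 syntax (fib_forever is my own name; itertools.islice does the limiting):

```python
from itertools import islice

def fib_forever():
    # Unbounded variant of fib(): yields Fibonacci numbers indefinitely;
    # it never raises StopIteration on its own.
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

# islice asks the generator for exactly ten values, so the infinite
# loop inside fib_forever only ever runs ten steps.
first_ten = list(islice(fib_forever(), 10))
# first_ten == [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

This only works because of the laziness discussed above - a list could never hold "all" Fibonacci numbers, but a generator never tries to.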
https://jamescooke.info/python-generators-and-yield.html
How to deploy Machine Learning/Deep Learning models to the web

The full value of your deep learning models comes from enabling others to use them. Learn how to deploy your model to the web and access it as a REST API, and begin to share the power of your machine learning development with the world.

If you have been in the field of machine learning for some time, you must have created some machine learning or deep learning models. Have you thought about how people will use your Jupyter notebook? The answer is: they won't. People cannot use your Jupyter notebooks, and you need to deploy your model either as an API, as a complete web service, on a mobile device, on a Raspberry Pi, etc. In this article, you will learn how to deploy your deep learning model as a REST API, add a form to take input from the user, and return the predictions from the model. We will use FastAPI to create it as an API and deploy it for free on Heroku.

Step 1: Installations

You need to install the necessary packages.

1. FastAPI + Uvicorn

We will be using FastAPI for the API and the Uvicorn server to run and host it.

    $ pip install fastapi uvicorn

2. Tensorflow 2

We will be using Tensorflow 2 for this tutorial, and you can use the framework of your own choice.

    $ pip install tensorflow==2.0.0

3. Heroku

You can install Heroku on Ubuntu directly from the terminal using the following command:

    $ sudo snap install --classic heroku

On macOS, you can install it via:

    $ brew tap heroku/brew && brew install heroku

For Windows, you can download the installer from the official website.

4. Git

You also need to install git and make an account on GitHub, so that we can push directly to GitHub and connect the main branch to Heroku, which will then deploy automatically. You can use apt to install git on Debian:

    $ sudo apt install git-all

To install it on Windows, you can download it directly from the git website. To install it on macOS, you can install the Xcode command-line tools and run the following command to activate it:

    $ git --version

You can also install it from the git website on macOS.
To install on macOS, you can install XCode command-line tools and run the following command to activate it, git --version You can also install it from the website of git on macOS. Step 2: Creating our Deep Learning Model We will create a simple deep learning model, which is related to sentiment analysis. The dataset used can be downloaded from Kaggle, which is related to GOP tweets. We will create this model, train it, and save it so that we can use the saved model in our API, and we do not have to train the model weights every time our API starts. We will create this model in the file model.py. import pandas as pd from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Embedding, LSTM, SpatialDropout1D from sklearn.model_selection import train_test_split import re Here we have imported the important libraries, which will help us in the creation of the model and cleaning of data. I will not dive into details of the deep learning model or working of Tensorflow. For that, you can check this article on KDnuggets, and for working on the sentiment analysis model, check out this article at CNVRG. We will read the data using Pandas. data = pd.read_csv('archive/Sentiment.csv') # Keeping only the neccessary columns data = data[['text','sentiment']] We will create a function to remove unwanted characters in Tweets using Regex. def preProcess_data(text): text = text.lower() new_text = re.sub('[^a-zA-z0-9\s]','',text) new_text = re.sub('rt', '', new_text) return new_text data['text'] = data['text'].apply(preProcess_data) We will use Tensorflow’s tokenizer to tokenize our dataset, and Tensorflow’s pad_sequences to pad our sequences. 
    max_features = 2000
    tokenizer = Tokenizer(num_words=max_features, split=' ')
    tokenizer.fit_on_texts(data['text'].values)
    X = tokenizer.texts_to_sequences(data['text'].values)
    X = pad_sequences(X, 28)
    Y = pd.get_dummies(data['sentiment']).values

Now we will split the dataset into training and testing portions.

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.20)

It is now time to design and create the deep learning model. We will simply use an embedding layer and some LSTM layers with dropout.

    embed_dim = 128
    lstm_out = 196

    model = Sequential()
    model.add(Embedding(max_features, embed_dim, input_length=X.shape[1]))
    model.add(SpatialDropout1D(0.4))
    model.add(LSTM(lstm_out, dropout=0.3, recurrent_dropout=0.2, return_sequences=True))
    model.add(LSTM(128, recurrent_dropout=0.2))
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

We will now fit the model.

    batch_size = 512
    model.fit(X_train, Y_train, epochs=10, batch_size=batch_size,
              validation_data=(X_test, Y_test))

Now that the deep learning model is trained, we will save it so that we do not have to train it every time we reload our server. Instead, we just use the trained model. Note that I have not done much hyper-parameter tuning or model improvement, as you can do that yourself to deploy an improved model.

    model.save('sentiment.h5')

Here we have saved our model in 'hdf5' format. You can learn more about model saving and loading in this article.
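Before moving on, it helps to be clear about what pad_sequences is doing to X above. The toy function below is not Keras code - just a plain-Python illustration of Keras's default 'pre' padding and truncation behaviour (pad with zeros at the front; keep the tail of over-long sequences):

```python
def toy_pad_sequences(seqs, maxlen, value=0):
    """Toy stand-in for keras pad_sequences: zero-pad at the front, and
    keep only the last `maxlen` items of longer sequences ('pre' mode)."""
    out = []
    for s in seqs:
        s = s[-maxlen:]                               # 'pre' truncation
        out.append([value] * (maxlen - len(s)) + s)   # 'pre' padding
    return out

padded = toy_pad_sequences([[5, 8, 2], [9], [1, 2, 3, 4, 5, 6]], maxlen=4)
# padded == [[0, 5, 8, 2], [0, 0, 0, 9], [3, 4, 5, 6]]
```

Every row ends up with the same length (here 4; in the article, 28), which is what lets the whole dataset be fed to the Embedding layer as one rectangular array.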
Step 3: Creating a REST API using FastAPI

We will create a REST API using FastAPI, in a new file named app.py. We will first do the important imports.

    import numpy as np
    from fastapi import FastAPI, Form
    import pandas as pd
    from starlette.responses import HTMLResponse
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    import tensorflow as tf
    import re

Here we have imported FastAPI and Form from the FastAPI library, using which we will create an input form and an endpoint for our API. We have imported HTMLResponse from starlette.responses, which will help in creating an input form.

We will start by creating an input form so that users can input data, i.e., a test string on which we can test the sentiment.

    app = FastAPI()

    @app.get('/predict', response_class=HTMLResponse)
    def take_inp():
        return '''
            <form method="post">
                <input maxlength="28" name="text" type="text" value="Text Emotion to be tested" />
                <input type="submit" />
            </form>'''

We have created our FastAPI app in the first line, and used the GET method on the /predict route, which will return an HTML response so that the user can see a real HTML page and input the data on a form using the POST method. We will use that data to predict on. You can run your app now with the following command:

    uvicorn app:app --reload

This will run your app on localhost. On that route, you can see the input form.

Now let us define some helper functions, which we will use to preprocess this data.

    data = pd.read_csv('archive/Sentiment.csv')
    tokenizer = Tokenizer(num_words=2000, split=' ')
    tokenizer.fit_on_texts(data['text'].values)

    def preProcess_data(text):
        text = text.lower()
        new_text = re.sub(r'[^a-zA-Z0-9\s]', '', text)
        new_text = re.sub('rt', '', new_text)
        return new_text

    def my_pipeline(text):
        text_new = preProcess_data(text)
        X = tokenizer.texts_to_sequences(pd.Series(text_new).values)
        X = pad_sequences(X, maxlen=28)
        return X

These functions are essentially doing the same cleaning and preprocessing work that we used in our model.py file.
Now we will create a POST request handler at the /predict route, so that the data posted using the form can be passed into our model and we can make predictions.

    @app.post('/predict')
    def predict(text: str = Form(...)):
        clean_text = my_pipeline(text)  # clean and preprocess the text through the pipeline
        loaded_model = tf.keras.models.load_model('sentiment.h5')  # load the saved model
        predictions = loaded_model.predict(clean_text)  # predict the sentiment
        sentiment = int(np.argmax(predictions))  # index of the max sentiment score
        probability = max(predictions.tolist()[0])  # probability of that sentiment
        if sentiment == 0:
            t_sentiment = 'negative'  # set the appropriate sentiment
        elif sentiment == 1:
            t_sentiment = 'neutral'
        elif sentiment == 2:
            t_sentiment = 'positive'
        return {  # return the dictionary for the endpoint
            "ACTUAL SENTENCE": text,
            "PREDICTED SENTIMENT": t_sentiment,
            "Probability": probability
        }

Now that is quite some code, so let us break it down. We have defined a route /predict for a POST request, where the data posted from the form will be our input; we have specified this in the function parameter as Form(...). We pass our text to the pipeline function so that it can return the cleaned and preprocessed data, which we can feed to our loaded model to get the predictions. We can get the index of the highest prediction using the argmax function from numpy, and pick the maximum probability using the max function from Python. Note that an endpoint in FastAPI has to return a dictionary or a Pydantic base model.

You can now run your app via:

    $ uvicorn app:app --reload

At the /predict route, you can give an input to your model, on which the model will predict the sentiment and return the results. We can also make a dummy route on the home page, i.e., "/", so that it is also working.
    @app.get('/')
    def basic_view():
        return {"WELCOME": "GO TO /docs route, or /post or send post request to /predict "}

You can see the complete code here:

Docs route on FastAPI

FastAPI has an amazing /docs route for every application, where you can test your API and the requests and routes it has. On our API, we have 3 routes in total. We can test all 3 by clicking on them. We will test the most important one, that is, the POST request on the /predict route, which performs all our calculations. Click on 'Try it out' to pass in the desired text and get its sentiment. Now you can check the results in the responses. A response of 200 means that the request is successful, and you will get a valid, desired output.
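Going back to the predict endpoint, its post-processing (argmax index, max probability, label mapping) can be isolated and sanity-checked without a trained model. A plain-Python sketch (label_from_scores is my own name, not part of the article's code; plain lists stand in for the model's numpy output):

```python
def label_from_scores(scores):
    """Mirror of the endpoint's post-processing: scores is one row of model
    output (three class probabilities); return (label, probability)."""
    labels = ['negative', 'neutral', 'positive']
    idx = max(range(len(scores)), key=lambda i: scores[i])  # np.argmax equivalent
    return labels[idx], max(scores)

label, prob = label_from_scores([0.1, 0.2, 0.7])
# label == 'positive', prob == 0.7
```

Keeping this mapping in a small pure function like this also makes the endpoint easier to unit-test than code that only runs behind a live server.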
You need to initialize the repository: $ git init Then add all the files: $ git add -A Commit all the files: $ git commit -m "first commit" Change the branch to main: $ git branch -M main Connect the folder to the repository on GitHub: $ git remote add origin Push the repository: $ git push -u origin main Step 6: Deploying on Heroku You need to create a new app on the Heroku dashboard. Choose an appropriate name for your app. In the deploy section, in the deployment method, choose GitHub. Search your repo here, and connect to it. You can choose automatic deploys so that every change in the deployment branch on GitHub will be automatically deployed to the app. For the first time, you need to manually deploy the app. Then every time you update your deployment branch on GitHub, it will be automatically be deployed. By clicking on Deploy Branch, it will start the deployment process, and you can see the logs by clicking on “More”, which can help you see the logs of applications, and you can see any error if you face. Once the build is successful, you can check your app by clicking on Open app. You can go to all the routes you have defined earlier in your app, and test them. Seeing Deployment history You can check the deployment history of your app on GitHub by checking the environment tab on the bottom left. It will also show you all the history of deployment. Accessing your API using Python Requests You can access your API, which means that you can use this API on your normal code to perform sentiment analysis tasks. import requests #install using pip if not already url = ' data = {'text':'Testing Sentiments'} #test is the function params resp = requests.post(url, data=data) #post request print(resp.content) And you will receive the output just like you were seeing the output in the endpoint. Accessing your API using Curl Curl is a command-line tool (you can download it from here) used to make requests from the command line. 
We can send the request using the following command:

```
$ curl -X 'POST' \
  ' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'text=WORST%20SHOW%20EVER'
```

Here the `-X` argument gives the type of request, i.e., a POST request. The `-H` arguments set the headers the API expects: the accept header (application/json) and the content type (application/x-www-form-urlencoded). Then we pass the form data with the `-d` argument; to encode a space in the value, use %20.

You can check the complete code at my GitHub repository here.

Learning outcome

In this article, you learned how to deploy your machine learning/deep learning model on the web as a REST API using Heroku and GitHub. You also learned how to access that API using the Python requests module and using curl.

Related:
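A footnote on the %20 escapes in the curl example above: they are standard URL percent-encoding, and Python's stdlib can produce the same encoded body, which is handy for checking what curl (or requests) actually sends on the wire:

```python
from urllib.parse import quote, urlencode

# quote() percent-encodes spaces as %20, matching the curl example exactly
print(quote("WORST SHOW EVER"))                # WORST%20SHOW%20EVER

# urlencode() builds a full form body, but encodes spaces as '+'
print(urlencode({"text": "WORST SHOW EVER"}))  # text=WORST+SHOW+EVER
```

Both '+' and %20 decode to a space in an application/x-www-form-urlencoded body, so either form is accepted by the server.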
https://www.kdnuggets.com/2021/04/deploy-machine-learning-models-to-web.html
Well, I'll see if I can put something basic together, though I can't make any promises as to when :-).

As for the other things, I think the plugin model you are using is great! As I understand it, it's very close to servlet filters, correct? I've built a very similar system before and it has proven to be most effective.

As for stored procedures and triggers, I think they are interesting, but unnecessary in a directory product with a servlet-filter-like system, because the plugins always stand in the way of information being written/changed and you can always "trigger" an event off of that. As for views, that's what virtual directory is all about :-). The big issues that are barriers to apacheds becoming a virtual directory are namespace overlap and a lack of an LDAP proxy. (DB->LDAP is nice too, but one step at a time.)

What I think would be REALLY interesting (at least from a developer's perspective) is a way to simplify the interaction with the directory from inside of a plugin. It's very common for IT departments to invest in specialized libraries for simplifying LDAP access. There are also more than a couple of SQL<->LDAP translation systems, in both Java and ADO. It's not something I've put a lot of thought into from inside of a directory, but I would definitely think it would help decrease the barrier of entry for apacheds.

marc

--- Alex Karasulu <aok123@bellsouth.net> wrote:
> Marc Boorshtein wrote:
> > --- "Noel J. Bergman" <noel@devtech.com> wrote:
> > > > Replicas and slaves are designed for single directory
> > > > environments. A proxy is a step closer to
> > > > virtualization in a multi directory environment (as
> > > > opposed to some kind of synchronization system).
> > > Would you want to federate at the proxy, or just
> > > have it proxy for multiple domains independently?
> >
> > Yes :-). I've seen both instances where you federate
> > multiple directories and when you manage separate
> > domains without performing any namespace translation.
> > It's a building block to virtualization and is one of
> > the quickest use cases.
>
> Marc, we don't have one yet but as you might imagine whipping one
> together would be trivial. If you're interested in groking this let me
> know I can lend a hand as you navigate the code base?
>
> Also I'm thinking of creating what I call LDAP views and stored procs
> which would also help as a building block of virtualization if you see
> what I am referring to. There are several building blocks that we need
> in our palette to make it really easy for building a virtual directory.
> The proxy backend is low hanging fruit for now.
>
> Alex
http://mail-archives.apache.org/mod_mbox/directory-dev/200504.mbox/%3C20050425160411.61474.qmail@web30706.mail.mud.yahoo.com%3E
August 28, 2018 | Written by: Ram Vennam

Categorized: Compute Services | How-tos | Open Source | The New Builders

Introducing the Jenkins X open source project. In this blog, I'll show you the installation steps to get Jenkins X running on IBM Cloud Kubernetes Service.

Installing Jenkins X on IBM Cloud Kubernetes Service

- Install Jenkins X and Helm.
- Use the Jenkins X CLI to create a new cluster on IBM Cloud Kubernetes Service. This command invokes the ibmcloud CLI to log in, create a cluster, install required local dependencies, and provision the Jenkins X platform. I recommend choosing a Standard cluster with at least 4 cores x 16 GB and 3 nodes for optimal performance. Note: choose Kubernetes version 1.10.

```
jx create cluster iks
```

Stand up and stretch your legs for a few minutes; the installer creates several deployments, persistent volumes, services, and ingress. After your break, set your KUBECONFIG to the new cluster and run the following to see everything that got installed:

```
kubectl get deployments,services,pvc,pv,ingress -n jx
```

Note that the persistent volumes incur additional charges.

After installation, run `jx console` to launch the console. Verify that you can log into Jenkins using the provided URL and admin password.

Configure Jenkins X to use IBM Cloud Container Registry

If you used the `jx create cluster iks` method described above, the cluster and Jenkins X installation are already configured to use the IBM Cloud Container Registry to store your container images.

All done with the setup! Now for the fun part

Back in the terminal, use `jx create quickstart` to create a project. This will create a new git repository in your GitHub org with the starter source code, a Jenkins pipeline, and a GitHub webhook to automatically pick up any code changes.
On code commits, the build checks out the source code from GitHub, builds the app, creates a Docker image, stores the image in the container registry, and then uses the Helm chart in the repo to deploy the application to your cluster.

Note: Due to a current limitation, your IBM Cloud Container Registry namespace (listed by `ibmcloud cr namespaces`) has to match your GitHub org name. If they do not match, create a new namespace using `ibmcloud cr namespace-add <your-github-org>`.

Make code changes in your git repository and watch for a new pipeline and an updated application. This gives you a great starting point and is ready for you to modify to match your requirements.

I hope this post gave you a quick overview of what Jenkins X is and how you can use it with IBM Cloud Kubernetes Service. To learn more, I recommend watching the presentation from KubeCon 2018.

If you have trouble

For questions regarding the IBM Cloud Kubernetes Service, engage our team via Slack by registering here and join the discussion in the #questions channel.
https://www.ibm.com/blogs/cloud-archive/2018/08/installing-jenkins-x-on-ibm-cloud-kubernetes-service/
should be:

```java
if ((front >= data.size()) || (data.size() == 0))
```

instead of:

```java
if (front >= data.size())
```

Is there any reason why you're not simply returning the first element in the Vector, then deleting it? e.g.:

```java
public int getFront() {
    int result = -1;
    if (data.size() > 0) {
        result = ((Integer) data.get(0)).intValue();
        data.remove(0);
    } else {
        throw new NoSuchElementException("Queue is empty");
    }
    return result;
}
```

With this method, you don't need to store (or manage) the "front" attribute.

The second test is redundant.

> Is there any reason why you're not simply returning the first element in the Vector,
> then deleting it?

This was discussed in a previous question.

Ah.

>> Second test is redundant.

If data.size() == 0, then data.get(front) will throw an exception, specifically the ArrayIndexOutOfBoundsException! (Or was this discussed in the unreferenced question too ;-) ?)

If data.size() == 0, then the first test will be true.
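The thread's fix is in Java; for illustration only, the same dequeue idea translated to Python (a list stands in for the Vector, and there is likewise no "front" index to manage):

```python
class SimpleQueue:
    """List-backed queue mirroring the Vector-based Java example."""

    def __init__(self):
        self.data = []

    def enqueue(self, value):
        self.data.append(value)

    def get_front(self):
        # Return and remove the first element; raise if the queue is empty,
        # just as the Java version throws NoSuchElementException.
        if not self.data:
            raise IndexError("Queue is empty")
        return self.data.pop(0)

q = SimpleQueue()
q.enqueue(1)
q.enqueue(2)
print(q.get_front())  # 1
```

The point of the answer carries over unchanged: once get_front both reads and removes the head, the front-index bookkeeping (and the redundant size check) disappears.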
https://www.experts-exchange.com/questions/20816888/array-out-of-bound-problem.html
I'm using Beautiful Soup 4 to parse a news site for links contained in the body text. I was able to find all the paragraphs that contained the links, but paragraph.get('href') returns None.

```python
from bs4 import BeautifulSoup
import urllib.request
import re

soup = BeautifulSoup("", "html.parser")
for paragraph in soup.find_all("div", class_="zn-body__paragraph"):
    print(paragraph.get('href'))
```

Do you really want this?

```python
for paragraph in soup.find_all("div", class_="zn-body__paragraph"):
    for a in paragraph("a"):
        print(a.get('href'))
```

Note that paragraph.get('href') tries to find the attribute href on the <div> tag you found. As there's no such attribute, it returns None. Most probably you actually have to find all <a> tags which are descendants of your <div> (this can be done with paragraph("a"), which is a shortcut for paragraph.find_all("a")), and then for every <a> element look at its href attribute.
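If bs4 is not available, the same nested lookup can be sketched with the stdlib html.parser module. The inline HTML below is a made-up stand-in for the fetched page, just to make the example self-contained:

```python
from html.parser import HTMLParser

class ParagraphLinkExtractor(HTMLParser):
    """Collect href values of <a> tags inside div.zn-body__paragraph."""

    def __init__(self):
        super().__init__()
        self.in_paragraph = 0   # nesting depth inside matching divs
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "zn-body__paragraph" in attrs.get("class", "").split():
            self.in_paragraph += 1
        elif tag == "a" and self.in_paragraph and "href" in attrs:
            self.hrefs.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "div" and self.in_paragraph:
            self.in_paragraph -= 1

html = '<div class="zn-body__paragraph">See <a href="https://example.com">this</a>.</div>'
parser = ParagraphLinkExtractor()
parser.feed(html)
print(parser.hrefs)  # ['https://example.com']
```

Note this sketch decrements the depth counter on any closing div while inside a matching one, which is fine for simple pages; Beautiful Soup's descendant search is the more robust tool for messy real-world HTML.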
https://codedump.io/share/XtCu1ZdnsZ7l/1/how-to-find-links-within-a-specified-class-with-beautiful-soup
High-level Languages and Speed

Posted by ScuttleMonkey from the ever-changing-animal dept.

Old debate (Score:5, Informative)

Well, we ran our own tests. We took a sizable chunk of supposedly well-written time-critical code that the gang had produced in what was later to become Microsoft C [2] and rewrote the same modules in Logitech Modula-2. The upshot was that the M2 code was measurably faster, smaller, and on examination better optimized. Apparently the C compiler was handicapped by essentially having to figure out what the programmer meant with a long string of low-level expressions. Extrapolations to today are left to the reader.

[1] I used to comment that C is not a high-level language, which would induce elevated blood pressure in C programmers. After working them up, I'd bet beer money on it -- and then trot out K&R, which contains the exact quote, "C is not a high-level language."

[2] MS originally relabeled another company's C compiler under license (I forget their name; they were an early object lesson.)

Re:Slashdot (Score:2, Informative)

Re:Old debate (Score:1, Informative)

Lattice

From a gray fox....

Re:It's very simple (Score:5, Informative)

I don't believe this as much as the people who I see repeating that sentence all the time... Not many years ago (with gcc), I got an 80% speed improvement just by rewriting a medium-sized function in assembly. Granted, it was a function which was in itself half C code, half inline assembly, which might hinder gcc a bit. But it's also important to note that if the function had been written in pure C code, the compiler wouldn't have generated better code anyway, since it wouldn't use MMX opcodes... Last I checked, MMX code is only generated from pure C in modern compilers when it's quite obvious that it can be used, such as in short loops doing simple arithmetic operations.
An expert assembly programmer in a CPU which he knows well can still do much better than a compiler. Re:Old debate (Score:3, Informative) Uh, Java and C# are strongly typed and structured languages. Yes, but is it worth it? (Score:2, Informative). Single Page Version of the Article (Score:3, Informative) high level vs. low level 101 (Score:1, Informative) Re:Bah (Score:3, Informative) It didn;t say much at all otherwise, but it did have a nice collection of adverts. Optimisation: You don't have to hack around, some compilers do it for you. The new MS compiler does a 'whole program optimisation' where it will link things together from separate object modules. Still cannot handle libraries, but then, that's:Old debate (Score:3, Informative) Actually the quote from my copy of K&R, on my desk beside me is, C is not a "very high level" language... emphasis is mine. They put the D in DUH (Score:2, Informative) I see no reason why a high-level application couldn't be compiled as skillfully as a feature-equivalent low-level application. It's just a matter of breaking down the code into manageable building blocks. Re:It goes both ways (Score:5, Informative) While the VAX had some complex instructions (such as double-linked queue handling), it did not have a quicksort instruction. Here [hp.com] is the instruction set manual. Re:Article is theory not practice - no measurement (Score:4, Informative) No, what they say is "the proof of the pudding is in the eating." (Just pointing it out because most people get it wrong.) Re:High-level languages have an advantage (Score:3, Informative) This sound perfectly reasonable in theory. In practice, however, it's not. Users want speedy development AND speedy execution. I developed a Java image management program for crime scene photos, and the Sheriff Patrol's commander told me flat out: we'll never use this. It's too slow. I rewrote the program using C++ and Qt, and gained a massive. 
Re:Old debate (Score:3, Informative) badly researched blather (Score:1, Informative) 1. Unsupported implication that 'C' was created in response to PDP-11 assembly language. 2. Using vector attached processors as evidence of HLL obsolescense. First, the Altivec/MMC unit is not the entire processor, it doesn't even do most of the work, it's an *attached* unit. There is still a main MPU to do the spaghetti code. Second, they are easily used by HLL's via optimized LIBRARIES, that's the beauty and breakthrough of 'C' that has become a model for HLL's. 3. JIT examples fail to include the runtime of the JIT compiler itself. The program may speed up by 10%, but running the JIT before the program will blow that time out of the water. 4. Article totally ignores the "RISC revolution" of the 80's where processors were actually designed based on HLL's, designed specifically to speed them up, acting in consort with the compilers & linkers. This concept is now old hat. Maybe the author wasn't born yet. Need I continue??:Old debate (Score:3, Informative) instructions. Optimisation can be performed in the high level language, and even the assembler, but most is performed at the intermediate level as this way all frontends can potentially benefit. Re:Along those lines... (Score:3, Informative) Re:Old debate :It goes both ways (Score:3, Informative) As someone else mentioned, there is no quicksort instruction. That's far too complex and involves looping and conditional branching. Probably the most complex of vax instructions was the polyf/polyg instruction, which would compute a polynomial to 7 iterations thus allowing one instruction to compute a trigonometric function. There were also instructions for copying strings up to 64k (and those instructions were interruptable), and instructions to format numbers a la cobol pics. These instructions were generally emulated in the smaller microvaxen and such, but were in microcode on the larger ones. 
Note that even x86 has a string copy instruction. Now, here's where you're really wrong. Those instructions weren't put in there as a convenience to humans writing in assembly. Instead, they were put in there as a convenience to compiler writers who could make use of the high-level assembly instructions to ease their code generation. The cobol compiler was almost unnecessary. They had numeric data types to cover it, it was nuts. They also had instructions to deal with octawords (128 bit integers), and of course the vax allowed accesses of any size integer on any boundary, which could mean a couple of fetches for a particular piece of data. There are assembly instructions to force alignment. The only non-magic of which I'm aware is that it was "required" that between writing a piece of code into memory and executing it there should be an intervening rei instruction, apparently to clear all caching. I put the word "required" in quotes for a reason. A professor at a college that I attended wrote a very popular Scheme compiler. I mentioned one day to a grad-student friend this requirement, and somehow we ended up getting to the prof. He didn't have that in his compiler and it worked just fine writing to a piece of memory then executing it. I showed him the page in the VAX Architecture Handbook (probably around 276 or 278) and we got a good chuckle. Anyway, shortly after VAX came out people started to seriously think about simplifying the instruction set and putting more burden on the compilers. I still believe the Alpha is probably the king of risc, ironic given that VAX is the king of cisc. Most of the lessons that VAX taught us were in the negative. Re:Old debate (Score:2, Informative) Re:Old debate (Score:3, Informative) Re:3, Informative) Yep C is very weakly typed (some could say that it's untyped, as is ASM) as only the compiler does some sanity check, and even then it doesn't work too hard at it.,000 for "I could care less". 
(0.28 seconds) Results 1 - 10 of about 776,000 for "I couldn't care less". (0.22 seconds) But "the proof is in the pudding" kind of rolls off the tongue better... like a pudding which tastes nasty and you are therefore gently, but suavely, spitting out.. Again, this is a property of implementations, not of languages. The highest-performance Common Lisp implementations have scaffolding written in C and assembly, but they do not use a C compiler when they compile Lisp code. They often use non-C ABI conventions for argument passing and stack handling, to make their style of function calling faster. I don't mean to be harsh, but the "Lisp is slow because it's interpreted" meme is about twenty years out of date. It tends to be spread primarliy by college professors whose last exposure to Lisp was pre-1980, and it really grates on those of us who know better. Re:Quoted often, but still wrong (Score:1, Informative) I agree. Why should we give any weight to the sayings of some random guy. What the hell would he know know about computer science? [utexas.edu] The quote is rubbish and contains no usefull information whatsoever. On the contrary: the conclusion it draws in abolutely false. It seems to me that you are good example of the type of person that the OP was complaining about (ie. not knowing much about computer science). If you read about the history of computer science you would see that it started as a pure mathematical discipline that just happened to use computing devices because the algorithms were too complex to be solved quickly by hand. The early computer was just a tool that made things easier for mathematicians, much like a telescope for astronomers. Of course, modern computer science focuses much more on algorithms specifically related to computer functions like disk caching, task scheduling, etc. So Dijkstra's comment may not be as relevant today but at the time he said it was pretty accurate. together a decent WS interface. Someone who does all of those. 
WhyTF has the S/W industry become so compartmentalized? I can hire a device driver person, but he won't know anything about web services. I can hire a DB person, but she won't know a damn thing about poking values into registers. I can hire a web-services person, but he will have never worked on a Unix platform before. WTF? Really, WTF? In short, I can't hire someone who can take ownership of an entire system. It's always, "Well, that's a hardware thing, go ask Foo", "Oh, it looks like the database, need to talk to Bar", "The Web interface is borked, we'll need to bring Baz in", "Hm, it doesn't do this when we run it on Windows" (this one always pisses me off, because they can never explain why, and that's because they know nothing about Unix). How come I can't hire someone who could understand a whole vertical stack (and maintain it, and provide analysis and fixes when something breaks)? I do this kind of thing now. If I can do it, it can't be that hard. But everybody thinks they have to specialize. THIS IS WHAT'S WRONG WITH THE INDUSTRY. tried to compile something. In that respect it's better to think they never had released this abomination. Re:Old debate (Score when someone who understands what is going on at the machine level writes C code, they can make quite fast results as compared to someone who has learned C syntax, but doesn't have a sense of what is happening with stacks, LEAs, how a particular problem may map to float, fixed or integer approaches on top of a particular processor or chip set. C++ approaches appear overrated to me. If I want objects, I make them. If I want a *really* high level approach, I use Python. Basically, give me C or give me Python. The Free Lunch is Over (Score:1, Informative) that C++ offers both very low and very high-level semantics. When properly used, this yields high level programs with excellent performance. But, so what? 
Neither C++ today, or any other very widely-used programming language adequately manages the real problem, which is concurrency. Herb Sutter has written an excellent paper on this topic, called "The Free Lunch is Over". Let's get off this hobby horse and on to some real (and interesting) problems! Here is Mr. Sutter's article: Re:Old debate (Score:5, Informative) Re:Quoted often, but still wrong (Score:3, Informative) Not. Even. Wrong. If astronomy was called "telescope science" you'd also forget that it was about ways of looking at the skies. Computers are more flexible that that - they are used to model and study all kinds of natural phenomena. Algorithyms are strictly speaking mathematics, which is a feature of the universe and not "man made" if anything ever was. Computers are used to store and manipulate data about all kinds of things, most of which are not about computers. learning how to do all that is computer science. C evolved from BCPL and PL/I (Score:1, Informative) By contrast, PL/I had a much more complete type system, although it was not even close to "strongly typed". PASCAL was still very very new when C was designed. In particular, PL/I strings and arrays were first class data types with compiler-known lengths, and buffer overflows were MUCH MUCH less common. (not impossible - just much less common). Full PL/I was an enormous language and hard to compile, but the ANSI G subset was actually quite reasonable and not hard to compile for. The DEC PL/I (ANSI G subset) and C compilers for the VAX used the same code generator back-end (written by Dave Cutler who also designed RSX-11/M, VMS, and Windows NT), but the PL/I compiler produced better code for string and array handling, precisely because the compiler knew more about what the programmer actually intended. It could take better advantage of the VAX instruction set, particularly for strings of maximum known length. 
String instructions, such as on the VAX or the IBM System/360 could easily handle PL/I strings, but null-terminated C strings were much harder to compile for. This is not surprising, since IBM designed PL/I as a language for the System/360. Re:C and Smalltalk is what happened. (Score:2, Informative) Re:Old debate (Score:3, Informative) While I agree with your core point, I have to take exception to the implication that C++ is at all a stricter C (even if it's also more). C++ and C are different languages, and C is not a subset of C++. There are valid C programs that are invalid in C++ (even not using things like variables named "new", etc), and features like implicit void casting that C++ lacks. There are programs that are valid C and valid C++ but behave differently. And that's without getting into features of modern C (variable size arrays, language built-in complex numbers, restricted pointers, etc) that are not in C++ as far as I know. But as far as your main point, yes, the reason to use C++ is if you want/need C++ features. My original objection was to the suggestion that you just "write C but use a C++ compiler to add namespaces and nothing else". Many of the drawbacks of C++ compared to C are pretty minor, and may be worth the tradeoff if you're going to take advantage of a lot of language features. Writing "C in C++" is just silly, though. Re:Old debate (Score:3, Informative)
http://slashdot.org/story/06/07/18/0146216/high-level-languages-and-speed/informative-comments
Debugging the cause of an unexpected reset can be tricky, as the debug link can be affected and some systems will not have an easy way to get the reset-cause information out of the MCU for analysis. The following idea may be useful to easily visualize the RSTSRC status whenever you come out of reset, as long as you have 7 GPIO pins to indicate the status of the RSTSRC register bits. If you do not, then you can use as many as you have and select and/or change the bits you feel are most important to view. The code below uses GPIO port P2, but you can modify it to use whatever pins you have available. In this scenario, you would run the following code along with your initialization early in your main() function (i.e., before the main loop) to check the reset source and visualize it by monitoring the pin that corresponds to each reset source in the RSTSRC register:

```c
/* Ensure that the crossbar is enabled.
 * Set up 7 GPIO to indicate status of RSTSRC bits. */
P2MDOUT |= 0x7F;          // set P2.0 - P2.6 to push-pull
P2 &= ~0x7F;              // clear P2.0 - P2.6

/* Check RSTSRC status and indicate with GPIO pins */
rst_src = RSTSRC & 0x7F;  // get RSTSRC value
P2 = rst_src;             // set P2 indicator pins
```

I am using a C8051F930-GDI in die package and am connecting the die to the PCB by wire bonding and glue. Is it useful to use conductive glue in order to connect the body of the die to ground?

The back of the die on Silicon Labs C8051F930-GDI devices is simply a silicon substrate and does not need to be attached to ground. The MCU connections to ground will be via the bond wires from the die to the PCB. Our suggestion is not to use an electrically conductive adhesive but to use a thermally conductive adhesive for heat dissipation. The manufacturing process is also easier using a non-conductive adhesive.
In general, your contract manufacturer should know the best practice for bare-die mounting, and you should double-check with them.

Can Comparator0 and Comparator1 have the same pins selected for use on the C8051F41x?

While the hardware configurator tool will not let you configure it this way, it is possible to have both comparators select the same pins. You will have to make this configuration outside of the configurator by modifying the initialization code directly.

Where can I find specs on the RST pin for EFM8 parts?

The EFM8 datasheets have a table called Reset and Supply Monitor that gives reset specs such as RST Low Time to Generate Reset. For electrical specs, the RST pin has the same electrical specifications as the other GPIO on the EFM8 devices. You can refer to the Port I/O table in the electrical specifications chapter of the desired device's datasheet for more information, such as Input Leakage.

Currently, in the EFM8 family, the EFM8BB3, EFM8BB2, EFM8LB1, and EFM8UB1 have an I2C Slave module. EFM8BB3 has the same I2C module as EFM8LB1, and EFM8BB2 has the same one as EFM8UB1. However, there are a few differences between EFM8BB3 and EFM8BB2 (and likewise between EFM8LB1 and EFM8UB1), as described below.

In EFM8BB3 and EFM8LB1, the Slave Address Mask (SLVM in the I2C0ADM register) can be used to define an address mask to enable automatic hardware response to multiple slave addresses. The I2C0SLAD field in the I2C0SLAD register, combined with the SLVM mask in I2C0ADM, defines the I2C0 slave address for automatic hardware acknowledgement. However, in EFM8BB2 and EFM8UB1 there is no Slave Address Mask field; the I2C0 slave address for automatic hardware acknowledgement is determined solely by I2C0SLAD in the I2C0SLAD register. When the received I2C address matches this field, hardware sets the I2C0INT bit in the I2C0STAT register.
For the EFM8BB3 and EFM8LB1, if the ADDRCHK bit is set in the I2C0CN0 register, the matching address will be placed in the receive FIFO, and firmware can check the address after reading it from the receive FIFO using the I2C0DIN register. This function does not exist in EFM8BB2 and EFM8UB1.

In EFM8BB3 and EFM8LB1, the FACS bit in I2C0ADM controls whether clock stretching is enforced (via setting of the I2C0INT bit) after a matching slave address has been acknowledged. When this bit is set, clock stretching always occurs after an ACK of the address byte until firmware clears the I2C0INT bit. When this bit is cleared, the I2C0INT bit won't be set by the address ACK alone, but it may be set due to other conditions as detailed in the descriptions of the RD and WR bits. However, in EFM8BB2 and EFM8UB1, there is no register for the Force Address Clock Stretching control, and clock stretching always occurs after an ACK of the address byte until firmware clears the I2C0INT bit.

Feature                                  EFM8BB3/EFM8LB1   EFM8BB2/EFM8UB1
Slave Address Mask (SLVM)                Yes               No
Address Check (ADDRCHK)                  Yes               No
Force Address Clock Stretching (FACS)    Yes               No
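The SLVM mask works like a typical hardware address filter: a received address is acknowledged when it agrees with I2C0SLAD on every bit position selected by the mask. A sketch of that comparison, written in Python purely for illustration (the convention that a set mask bit means "must match" is an assumption here; verify the exact polarity and bit width against the EFM8 reference manual):

```python
def address_matches(received, slad, slvm):
    """True if 'received' matches slave address 'slad' on all bit
    positions selected by the mask 'slvm' (set bit = compare this bit)."""
    return (received & slvm) == (slad & slvm)

# With a full 7-bit mask, only the exact address matches
print(address_matches(0x3A, 0x3A, 0x7F))  # True

# Clearing the two low mask bits makes the filter respond to a block
# of addresses: 0x38, 0x39, 0x3A, and 0x3B all match here
print(address_matches(0x3B, 0x3A, 0x7C))  # True
```

This is why the mask-less EFM8BB2/EFM8UB1 parts can only ACK one fixed address in hardware, while BB3/LB1 can answer a whole range without firmware intervention.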
You can do this by adding the following code to the boot_startup.asm file in the bootloader project:

```
?BL_JUMP SEGMENT CODE AT 0
        RSEG    ?BL_JUMP
        LJMP    boot_start
```

So, if there is no application, this will jump automatically to the bootloader. Now, you need to modify your application code to mimic the bootloader-capable parts' additional functionality. I've done this in the SILABS_STARTUP.A51 file of an application, since this is where you can insert code that effectively runs before your application. In here, we simply need to check whether the bootloader exists. If it does, we jump there first and it performs the other checks to see if we should go to the application. However, in this case, we also need to determine whether we just came from the bootloader, since we technically are the application. Without this, we would get stuck in a loop: jumping from the application to the bootloader, back to the application, back to the bootloader, and so on. I used R7 to store a particular value to say that we've just come from the bootloader. Here is the modified SILABS_STARTUP.A51 file:

```
CSEG AT 0
?C_STARTUP: LJMP  BootloaderCheck

        RSEG    ?C_C51STARTUP

#include "efm8_device.h"
#define BL_SIGNATURE 0xA5

BootloaderCheck:
        ; Read and test R7 to see if we've already entered the bootloader
        ; since the last reset. If so, we should skip to the application
        mov   A, #BL_SIGNATURE
        xrl   A, R7
        jz    GotoApplication

        ; Read and test the boot vector enable byte (byte before Lock Byte)
        ; The signature is present if A is 0 (leave result in A)
        mov   DPTR, #(BL_FLASH0_LIMIT - 2)
        movc  A, @A+DPTR
        xrl   A, #BL_SIGNATURE

        ; Restore the DPTR
        mov   DPTR, #0000h

        ; If the signature is present, jump to the boot vector
        jz    GotoBootVector

GotoApplication:
        clr   A                  ; Restore A
        mov   R7, #0x00          ; Restore R7
        jmp   STARTUP1           ; Jump to reset vector (use this to save a byte)

GotoBootVector:
        ; A = 0, no need to restore
        mov   R7, #BL_SIGNATURE  ; Write 0xA5 to R7 to indicate we've bootloaded
        ljmp  BL_START_ADDRESS   ; Jump to boot vector
```

The full SILABS_STARTUP.A51 file is zipped and attached to this forum post.

To summarize:

Silicon Labs' EFM8 family of 8-bit MCUs is now supported by IAR's 8051 Embedded Workbench. The EW offers an optimized compiler, a comprehensive debugger, integrated static analysis tools, and more. For documentation, to learn more, or to download, please visit.

8-bit Knowledge Base

Storage Time and Handling Considerations for C8051Fxxx/EFM8 devices
Debugging Reset Cause on a Silicon Labs 8-bit MCU
Attaching C8051F930-GDI devices in die/wafer form to a PCB
F41x Comparators
RST pin
The difference of the I2C Slave module among EFM8BB3, EFM8BB2, EFM8LB1, EFM8UB1
The role of the STALL handshake packet in USB transfer
AN945 Bootloader on non-bootloader enabled devices
I2C Pull-up resistor calculation
IAR Embedded Workbench Support for EFM8
Creston News Advertiser, Monday, October 14, 2013

Panthers roll in 'elimination' game: SPORTS, page 6A

TARGETING HUNGER
HUSH program helps distribute more than 10,000 pounds of deer meat to local families in need
By JAKE WADDINGHAM, CNA staff reporter, jwaddingham@crestonnews.com

Despite the program's name, HUSH is not something the Iowa Department of Natural Resources wants area hunters to keep quiet about this hunting season.

HUSH — Help Us Stop Hunger — is a combined effort to provide quality red meat to those in need and to help control the deer population in the state. This is the 11th year for the program.

"HUSH is a simple process from the hunter's standpoint," said Iowa DNR HUSH coordinator Jim Coffey. "It is a program that our deer hunters have asked for and the Food Bank of Iowa is just thrilled with it."

Hunters must take a buck or doe legally, tag it, then transport it to one of 88 participating meat lockers in Iowa. Once the locker approves the deer, all the hunter has to do is sign a HUSH card, allowing the processor to connect with the Food Bank of Iowa to combat hunger. The closest participating lockers are Corning Meat Processing and Zeb's Smokehouse in Bedford.

[CNA file photo: Two bucks make their way along the fence in a field at Green Valley Lake north of Creston.]

"We normally end up with about 300 donations; last year we were in the 200s," said Dave Walter, the owner of Corning Meat Processing. "It has been a great experience. Since it began in 2004, we have been near the top in number of donations and pounds."

Last hunting season, 5,281 deer were donated to HUSH, generating about 880,000 meals for Iowa families in need. While total donations were down last year, Coffey said the drop is consistent with the decline of the total deer herd in the state. But even though the total deer herd may be smaller, overpopulation is still occurring in rural parts of the state.

Making donations

Hunters who wish to use the meat for personal use are still contributing to HUSH in an indirect manner: each tag has a $1 charge to help fund the program.

"That is the beauty of the HUSH program," Coffey said. "There is a local connection and it is distributed right back into the community."

The locker receives $75 for each processed deer, and the Food Bank of Iowa gets a $5 administrative cost for each deer distributed. The $1 charge on each tag helps cover the expense.

Walter said it takes about 15 minutes to prep a deer that has been donated to HUSH. Corning Meat Processing is connected with the Presbyterian church in Creston and the MATURA office in Corning to help get the meat to those in need. Last year more than 10,000 pounds of red meat were distributed.

"It blew my mind the number of people that needed it (the meat)," Walter said.

Matt Carr, 22, hunts in Ringgold, Union and Adair counties during bow and late muzzleloader season. He has donated to HUSH an estimated 15 times since he started hunting.

"I like deer hunting, but we don't eat a lot of deer (meat)," Carr said. "It gives me a place to take the meat to help feed less fortunate families and it helps take more does out of the herd."

Coffey said hunters should contact the locker where they plan to donate. Most of the time the only requirement is to field dress the kill, but accepting the meat is at the locker's discretion. (Please see HUSH, Page 2)

QUICK NEWS

Creston Food Crawl

The inaugural uptown Creston Food Crawl is slated 4 to 7 p.m. Saturday. The event will include samples of culinary delights, live music and a chance to meet local farmers and growers.

"This is a family friendly event," said Alexi Groumoutis, local foods coordinator for the Southern Iowa Local Food Council. "It's my goal to introduce the community to local farmers and also implement local foods into institutions. It's taking a step beyond going to the farmers' market. We want businesses to use locally-sourced foods."

Five businesses are participating in the event: Creston Chamber of Commerce; Upper Crust Bakery, making sweet potato creme brule with walnuts; A&G Pizza Steakhouse, with spinach alfredo pizza; Adams Street Espresso, with hot apple cider or pumpkin frappucino; and Elm's Club, with cheese steak sliders with green peppers and onions.

Tickets are $10 and are available now at any participating business. For more information, contact Groumoutis at 515-447-3952.

Government shutdown

WASHINGTON — Washington is deadlocked as it enters the 14th day of a partial government shutdown. The crisis has already led to furloughs of 350,000 federal workers, canceled military training missions and slowed economic growth. NBC reports multiple talks between Republican congressional leaders and President Barack Obama took place over the past two weeks.

[CNA photo by JAKE WADDINGHAM. Fall colors: The trees around Green Valley Lake north of Creston are slowly changing to fall colors. The 2013 fall foliage outlook predicts Iowa colors will be average this year.]

Dream Big contest voting update

Three Creston businesses are participating in the Dream Big, Grow Here grant competition: Upper Crust Culinary Creations, She-Nae's Rescued and Renewed and Leslie's Dance Emporium (Defy Gravity). Voting began last week. In this contest, the top five vote-getters at the end of the voting period (Oct. 21) in the south-central Iowa competition will be invited to compete in a pitch-off event — slated Nov. 12 at Lakeside Casino — for a chance at winning $5,000 toward their dream. The winner of the regional contest will then advance to a state competition slated for spring 2014, where winners will have a chance to win an additional $10,000.

     Business                            City        Votes
 1)  Afton Fitness Center                Afton       1,390
 2)  Podium Ink                          Mount Ayr     960
 3)  Upper Crust Culinary Creations      Creston       515
 4)  Play Posse                          Mount Ayr     439
 5)  She-Nae's                           Creston       296
 6)  Sew and Sew                         Lamoni        276
 7)  Speed Moto Company                  Lamoni        239
 8)  Leslie's Dance (Defy Gravity)       Creston       181
 9)  Country Cabins                      Chariton      180
10)  Alley Diesel Performance            Corydon        59
11)  Showstock Planet                    Woodburn       13

To vote, go to www.dreambiggrowhere.com. Voting is allowed once each day. Voting ends Oct. 21.

Copyright 2013. In person: 503 W. Adams Street. Mail: Box 126, Creston, IA 50801-0126. Phone: 641-782-2141. Fax: 641-782-6628. E-mail: news@crestonnews.com

Contents: Classified 8-9; Comics 4; Crossword 4; Deaths 2; Heloise Tips 4; Local 3; Sports 6-8. Tuesday weather: High 50, Low 40. Full weather report, 3A.

Deaths (2A)

Mary Josephine Cihak, Creston

Mary Josephine Cihak, 90, of Creston died Oct. 11, 2013, at Creston Nursing and Rehab. Graveside services will be 10:30 a.m. Tuesday, Oct. 15, at Calvary Cemetery. The Rev. Kenneth Halbur Jr., Holy Spirit Catholic Church, will officiate. Open visitation will be 2 to 8 p.m. today at Pearson Family Funeral Service, 809 W. Montgomery St. The family requests no memorials. Online condolences may be left at.

Mary Josephine Cihak, daughter of Josephine Mary (Deholien) and James Vincent Cihak, was born Jan. 3, 1923, in Creston. Mary Josephine graduated from Creston High School in 1940.
She received her bachelor's degree from the University of Iowa in 1945, and she received her master's degree from the University of Southern California in 1951. Mary Josephine taught school in Bakersfield, Calif., before moving to Council Bluffs in 1969. She worked for the Council Bluffs school system as a reading specialist for 11 years, retiring in 1980. Mary Josephine taught both public and parochial school in Creston and various other places.

Mary Josephine was a past member of American Association of Women, Kappa Kappa Iota Educational Sorority and Iowa Retired Teachers Association.

Mary Josephine is survived by her brother James Cihak of Creston; nephews, Mike (Joy) Cihak of Afton and Greg Cihak of Indianola; and niece Julie (husband Mark Cullen) Cihak of West Des Moines. Mary Josephine was preceded in death by her parents and sister-in-law Verlee Cihak.

G.R. "Bob" Hopkins, Creston

George Robert "Bob" Hopkins, 93, of Creston, passed away Wednesday, October 9, 2013. Online condolences can be made under the obituary category at.

George Robert "Bob" Hopkins was born February 19, 1920 in Creston, Iowa, to Mary Josephine (Pfeiffer) and George Bierce Hopkins. He graduated from Creston High School in 1938 and Creston Junior College in 1940. He enjoyed working at the cone shop on Russell Street during those years and had many memories from that. Employment at Montgomery Ward was interrupted by WWII. After training different cadres, Bob saw active duty in France and Germany from 1942 to 1946. He began his 34-year career with the postal service in 1948 as a clerk and eventually became a rural mail carrier. Many people in the area will remember him as their mailman.

During a lonely time in his life, Bob decided to participate in the Refugee Program sponsored by Gov. Robert Ray. His first family included three generations. The Ly family has continued to stay in touch with Bob, appreciating his efforts to help all of them. Another family from Laos stayed with Bob for a brief time.

July 18, 1982, Bob married Judy from Cromwell, and they joined two families to include eight children. This was their world for the next 31 years. They added two exchange students during that time: Karino Schmelzer from Germany and Miho Aoyagi from Japan.

Bob loved to hunt, fish in his own pond with kids and grandkids and teach all of them his creed: "Work is so rewarding!" He was a member of Holy Spirit Catholic Church and the Creston VFW Post #1797.

Survivors include his wife, Judy; four sons, Mike (Victoria) of Truth or Consequences, NM, Mark (Pamela) of Naples, FL, Matt (Amy) of Missoula, MT, and Josh (Beth) of Orient, IA; four daughters, Dr. Penelope Isabella of Loveland, OH, Dr. Mary (Mick Gill) Hopkins of Howard, CO, Marty (Fred) McClurg of North Liberty, IA, and Sherri (Mike Hinrichs) Hopkins of Altoona, IA; ten grandchildren; one great-granddaughter; and his mother-in-law Ocie Hughes of Creston. He was preceded in death by his parents; one daughter, Mary Veronica, in infancy; two sisters, Margaret Lesan and Alice Mullen; and one brother in infancy.

Margaret Brosnahan, Creston

Margaret Brosnahan, 86, formerly of Afton and Creston, died Oct. 10, 2013, at Afton Care Center. Mass of Christian Burial will be 10:30 a.m. Wednesday, Oct. 16, at Holy Spirit Catholic Church. The Rev. Ken Halbur will officiate. Burial will be in St. Edward Catholic Cemetery west of Afton. Open visitation will be 2 to 8 p.m. Tuesday, with family present 6 to 7:30 p.m. and a 7:30 p.m. Rosary, at Powers Funeral Home, junction of highways 34 and 25. Memorials may be given in her name to Holy Spirit Catholic Church. Online condolences may be left at.

Margaret Mary Brosnahan, daughter of Ella (Shope) and John Carroll, was born Dec. 21, 1926, in Des Moines. Margaret graduated from St. Joseph Academy, then continued her education at Drake University. While attending Drake, Margaret worked for Northwest Bell in Des Moines. On Dec. 27, 1951, Margaret married James "Jim" Brosnahan at St. John's Chapel in Springfield, Mo. The couple made their first home in Des Moines. After starting a family, Jim and Margaret moved to Afton, where Margaret became a full-time homemaker. Jim and Margaret moved to Creston in 1992. After Jim's death May 15, 1997, Margaret remained in Creston until January 2006, at which time she moved to the Bishop Drumm complex in Johnston. In February 2013, she moved to Afton Care Center.

Margaret was a former member of Holy Spirit Catholic Church in Creston and St. Edward Catholic Church in Afton. While living in Creston, she volunteered at Rectory Rerun and with Greater Regional Auxiliary. She was a former member of Catholic Daughters, St. Edward Altar Society, Doll Club and several neighborhood clubs.

Margaret is survived by her children, Mary (Cal) Hultman of West Des Moines, Patricia (Warren) Meyer of Houston, Texas, Sheila (Keith) Brown of Afton, John (Barbara) Brosnahan of Arlington, Va., and Catherine (Mike) Huber of Omaha, Neb.; grandchildren, Christopher Hultman (fiancee Victoria Petrucco), Erin (William) Lopez, Rachel Meyer, Justin (Amber) Brown, Matthew Brown, Sarah Brosnahan, Anna Brosnahan and Noah Huber; step-grandchildren, Cal Hultman II and Dan (Emily) Hultman; seven great-grandchildren; sisters, Eleanor McLaughlin and Theresa (Larry) Contri, all of Des Moines, and Rose (Joe) O'Reilly of Chariton; and several nieces and nephews.

In addition to her husband, Margaret was preceded in death by her parents; sisters, Mary Kalinowski, Frances Knight and Agnes Jolly; sister-in-law Ann Brosnahan; brothers-in-law, Chester Kalinowski, Richard Knight, Lee Jolly, Richard McLaughlin and Fr. William Bresnahan; and four nieces.

William Tridle IV, Milford

William "Willy" Tridle IV, 30, of Milford, formerly of Orient, died Oct. 11, 2013, in Milford. Funeral services will be 3 p.m. Wednesday, Oct. 16,
at Orient United Methodist Church. The Rev. Cathy Nutting will officiate. Burial will be in Rose Hill Cemetery in Nevenville at a later date. Visitation with family present will be 1 to 3 p.m. Wednesday at the church. Steen Funeral Home, Greenfield, is in charge of arrangements.

[CNA photo by KYLE WILSON. Going deep: Logan Anson, 8, of Creston leans back and fires a spiral to his father Joe Anson Sunday evening at Panther Field. The father-son combo was practicing several football passing patterns including the post, buttonhook and Logan's favorite, "the deep ball."]

HUSH: Continued from Page 1

Corning Meat Processing, for example, is open extra hours starting in December. Walter goes in Saturday and Sunday from 4 to 7 p.m.

"It makes it easier on the hunter to take it in on Saturday and Sunday," Walter said. "And it makes it easier on me so I don't have to catch up on Monday."

Since Christmas falls in the middle of the week this year, Corning Meat Processing will not be open on Christmas Eve or Christmas Day. So after local hunters find their trophy kill this season, Coffey encouraged hunters to pick up an extra tag to make a contribution to the program.

"Any deer that is harvested on any tag may be donated," Coffey said. "You do not have to be a resident of the county you donate."

LOCAL (3A)

[Weather, page 3A: today's weather and local 5-day forecast (Tue 50/40, windy; Wed 55/35; Thu 57/34; Fri 60/37; Sat 62/38), Iowa and national city forecasts, sunrise/sunset times, moon phases and UV index tables.]

Almanac

To place an item in the Almanac, call the CNA news department, 782-2141, Ext. 234.

AA, 7:30 p.m., United Church of Christ, 501 W. Montgomery St. Use east door.

Schedule of driver's license examiners:
Bedford: Monday through Friday, 8:30 a.m. to 3:30 p.m., treasurer's office, Taylor County Courthouse, 407 Jefferson St.
Corning: Monday through Friday, 8:30 a.m. to 4 p.m., treasurer's office, Adams County Courthouse. Driving tests on Wednesday mornings by appointment.
Creston: Monday through Friday, 8:30 a.m. to 4 p.m., treasurer's office, Union County Courthouse, 300 N. Pine St. Driving tests Wednesdays. Call 782-1710 for an appointment.
Greenfield: Monday through Friday, 8 a.m. to 4 p.m., treasurer's office, Adair County Courthouse, 400 Public Square.
Mount Ayr: Monday through Friday, 9 a.m. to 3 p.m., treasurer's office, Ringgold County Courthouse, 109 W. Madison St.
Osceola: Monday through Friday, 8:30 a.m. to 3:30 p.m., treasurer's office, Clarke County Courthouse, 100 S. Main St.
Winterset: Monday through Friday, 8:15 a.m. to 3:45 p.m., Madison County Courthouse, 112 N. John Wayne Drive.

Chautauqua Club field trip, 1:30 p.m., Miss Spencer's Tea Room in New Virginia. Meet at Creston United Methodist Church. Program Committee Peg Anderson, Carolyn Biere and Pat Butcher, hostesses.
Free community meal, 5 to 6 p.m., United Church of Christ (Congregational), 501 W. Montgomery St.
Creston City Council, 6 p.m., council chambers, restored Creston Depot.
Alcoholics Anonymous (AA), 7:30 p.m. closed meeting, St. Malachy Rectory, 407 W. Clark St.

Meeting

Creston City Council, 6 p.m. Tuesday, council chambers, restored Creston Depot. Agenda includes: public forum; set public hearing for 6 p.m. Nov. 5 for the purpose of accepting comment on an application for a community development block grant for north side sanitary sewer improvements, and for the purpose of accepting comment on Ordinance 14-148 in the Code of Ordinances of the city of Creston, 1996, by amending Chapter 62 (general traffic regulations), Section 62.01 (violation of regulations), subsections 1-18 (provisions pertaining to speed regulations); discuss engineering contract with Snyder and Associates for Hurley Creek improvement project based on recommendation of Park and Recreation Board; special assess nuisance abatement charges because of nonpayment by property owners; set public hearing 6 p.m. Nov. 5 for an easement request to place a metal carport in city right of way in front of 1102 N. Walnut; placement of new street light in 800 block of North Lincoln Street; acquire abandoned property at 302 1/2 S. Birch via Iowa Code 657A.10A; street closings for the Lighted Christmas Parade and fireworks display, 6 p.m. Dec. 5 until the parade is finished and 7 p.m. until the fireworks are finished.

Police

Galen Ray Zumbach, 1201 Country Club Dr., reported his garage door was damaged between 5 p.m. Friday and 6 a.m. Saturday. Damage estimate is $1,001.

Miscellaneous
Assistance, 5:17 a.m., Friday, North Pine Street.
Parking complaint, 9:02 a.m., Friday, North Elm Street.
Alarm, 9:49 a.m., Friday, North Walnut Street.
Harassing communication, 10:04 a.m., Friday, North Pine Street.
Animal call, 11:44 a.m., Sunday, Stone Street.
Talk to officer, 3:33 p.m., Sunday, West Montgomery Street.
Domestic dispute, 4:27 p.m., Sunday, West Montgomery Street.
Theft, 4:41 p.m., Sunday, North Pine Street.
Theft, 6:17 p.m., Sunday, North Pine Street.
Found property, 6:32 p.m., Sunday, North Pine Street.
Disturbing the peace, 12:57 a.m., today, South Poplar Street.

Fire

Miscellaneous
Medical, 10:14 p.m., Friday, Grand Avenue.
Medical, 5:45 p.m., Saturday, West Mills Street.
Medical, 10:15 a.m., Sunday, North Division Street.
Medical, 5:43 p.m., Sunday, North Walnut Street.

Accident

No citations were issued after an accident 6:44 a.m. Friday on High and Dry Road. JD Conard of Lenox was driving a 2004 Oldsmobile north on High and Dry when a deer ran in front of the vehicle. Conard struck the deer. Damage estimate is $2,500 to Conard's vehicle.

Lottery
Iowa's Pick 3: 5-0-4
Iowa Cash Game: 5-7-15-19-28
Hot Lotto: 4-14-28-39-46 (2)
Powerball: 8-10-26-57-58 (4)

[CNA photo by KYLE WILSON. From left, Creston firefighters Jordan Nelson, Bailey Poolman, Lee Freeman, Ray Ott and Jim Schaffer investigate a possible fire on the roof at Dollar General Store, 703 W. Taylor St., Saturday evening. According to a Creston Fire report, the air conditioning unit malfunctioned. Black smoke was visible inside the building; however, no fire was discovered. Firefighters shut off power to the air conditioning unit. No injuries were reported.
A damage estimate was not given.]

3D mammography seminar offered

The Greater Regional Medical Center radiology department will offer an educational session Oct. 21, and repeat the session Oct. 28, on the new 3D mammography technology now available at Greater Regional. The "brunch and learn" will be held 10 to 11 a.m. in the Medical Arts Plaza Conference Room and features Dr. Robert Filippone, radiologist, and members of the radiology staff. Seating is limited and preregistration is requested by calling 641-782-3828.

Markets

Grain prices quoted at 10 a.m. today:
• Farmers Co-op, Creston: Corn — $4.34; Soybeans — $12.16
• Gavilon Grain: Corn — $4.28; Soybeans — $12.32

Day's Record

From Creston Official Weather Station: high past 24 hours (66), low past 24 hours (39) and precipitation ending 7 a.m. today (.0).

Almanac (continued)

TOPS 116, 8:30 a.m. weigh-in, 9 a.m. meeting, United Church of Christ (Congregational).
J.C. Penney former employees coffee, 9 a.m., The Windrow.
Alcoholics Anonymous (AA) brown baggers 12x12 study, noon open meeting, St. Malachy Rectory, 407 W. Clark St. No smoking.

Police (continued)

Traffic stop, 10:06 a.m., Friday, North Lincoln Street.
Harassing communication, 3:04 p.m., Friday, North Sycamore Street.
Reckless driving, 3:24 p.m., Friday, North Maple Street.
Talk to officer, 4:05 p.m., Friday, North Pine Street.
Information, 4:26 p.m., Friday, North Pine Street.
Talk to officer, 4:27 p.m., Friday, North Pine Street.
Reckless driving, 7:29 p.m., Friday, Highway 34.
Suspicious person, 8:34 p.m., Friday, South Sumner Avenue.
Traffic stop, 9:03 p.m., Friday, West Prairie Street.
Information, 11:30 p.m., Friday, North Sumner Avenue.
Shoplifting, 12:44 a.m., Saturday, Laurel Street.
Disturbing the peace, 1:32 a.m., Saturday, West Montgomery Street.
Possible OWI, 1:41 a.m., Saturday, West Taylor Street.
Disorderly conduct, 2 a.m., Saturday, East Fremont Street.
Animal call, 2:17 a.m., Saturday, North Division Street.
Alarm, 3:15 a.m., Saturday, North Walnut Street.
Drive off, 6:20 a.m., Saturday, West Taylor Street.
Alarm, 8:44 a.m., Saturday, West Taylor Street.
Traffic control, 9:12 a.m., Saturday, West Taylor Street. Reckless driving, 1:11 p.m., Saturday, West Taylor Street. Traffic stop, 1:33 p.m., Saturday, South Livingston Street. Traffic stop, 1:45 p.m., Saturday, East Townline Street. Traffic stop, 1:48 p.m., Saturday, West Townline Street. Traffic stop, 2:03 p.m., Saturday, West Taylor Street. Traffic stop, 4:28 p.m., Saturday, South Cherry Street. Assistance, 6:03 p.m., Saturday, Division Street. Domestic dispute, 9:42 p.m., Saturday, North Walnut Street. Traffic hazard, 12:49 a.m., Sunday, North Sycamore Street. Prowler, 1:04 a.m., Sunday, South Park Street. Transport, 2:18 a.m., Sunday, 10/19 Iowa At A Glance For the record Meeting 62/38 10/15 Sat 10/18 Local 5-Day Forecast Windy and becoming cloudy in the afternoon. High near 50F. Driver’s license Tuesday TOPS No. 1338, 5 p.m., First United Methodist Church. AA, 5:30 p.m., Crossroads Mental Health Center, 1003 Fri 10/17 Today's Weather Almanac Monday Thu 10/16 2 2 4 $ 3 $ 39 5 $ 1/3 lb. each cheeseburgers chickeninstrips a basket Hwy. 34 • Creston 641-782-2826 How to Not Hurt Your Work Injury Claim Each year thousands of Iowans are hurt at work, but many fail to learn: ∙ The Injured Workers Bill of Rights Including Payment of Mileage at $.555 per mile ∙ 5 Things to Know Before Signing Forms or Hiring an Attorney, etc. A New Book reveals these and much more and is being offered at No Cost by Iowa Work Injury Attorney Corey Walker because since 1997 he has seen firsthand the consequences of clients’ costly mistakes. Claim your copy (while supplies last) if you do not have an attorney and Call Now (800)-707-2552, ext. 311 (24 Hour Recording) or go to. 4A Creston News Advertiser Monday, October 14, 2013 ENTERTAINMENT The price remains the same Dear Readers: Here is this week’s SOUND OFF, about the quantity of food in packages: “My pet peeve is manufacturers lowering the quantity of food in their packages and charging the same amount of money. 
Do they think we don't notice this? I would prefer that they keep the quantity the same and raise the price a bit. It has happened to coffee, ice cream, cheese, tuna, etc. — Bev in New Hampshire"
How right you are, and many readers have complained about this same thing, Bev. It also can throw off a recipe. If you were using a can of tomato sauce before and now there is less sauce in the can, you may have to buy two cans. Folks, check the size of your favorite food item — you may be shocked that it's smaller! — Heloise

Hints from Heloise

FAST FACTS
Dear Readers: What to do with one unmatched sock:
• Dust window blinds or furniture with it.
• Put around a drink for insulation.
• Use on a dust mop instead of cloths.
• Make an eye mask out of it.
• Cover an armrest on a chair to keep it clean.
— Heloise

MOP FOLLOW-UP
Dear Heloise: In response to a column regarding how to clean mop heads, I would like to share my tip. I have a sponge mop, and each day as I clean up the kitchen, instead of wetting the mop, I dampen a clean rag and perhaps spray it with a bit of cleaner. I then throw it on the floor and push it around with my dry sponge mop. When finished, the rag goes in the laundry. I never feel like I am mopping with a dirty tool or dirty water, and my mop lasts for years! — Patti D. in California

PEELING A MANGO
Dear Heloise: Peeling a mango using a potato peeler makes the task easy and fast. — Jon H., Alexandria, Va.

FROST-FREE
Dear Heloise: I do not have a garage, so my car is exposed to the elements overnight. During the colder months, I am always dealing with frost on my windows. Do you have any hints to help me? I often am late due to waiting for my windows to defrost. — Kevin in Chicago
I do have a hint for you, and it is simple and cheap! Mix a solution of 1/2 vinegar and 1/2 water. Coat each window with the solution, and it should keep them from getting frosted overnight. Vinegar is the perfect product to have around because it is good for so many things! Want to know what else I use it for? Order my pamphlet Heloise's Fantabulous Vinegar Hints and More. To receive it, send $5 and a long, self-addressed, stamped (66 cents) envelope to: Heloise/Vinegar, P.O. Box 795001, San Antonio, TX 78279-5001. A good, multipurpose household cleaner is 1/2 cup white vinegar, 1 pint rubbing alcohol, 1 teaspoon dishwashing liquid and enough water to make a gallon. Put it in a spray bottle and label clearly. — Heloise
(c)2013 by King Features Syndicate Inc.

SPORTS

The Numbers Game
3: The number of game-tying grand slam home runs in postseason history after David Ortiz's grand slam Sunday night.

Panthers roll in 'elimination' game
By LARRY PETERSON
CNA sports writer • lpeterson@crestonnews.com
With their backs against the wall in terms of playoff qualification, the Creston/O-M Panthers responded in a big way Friday with a 49-21 victory over Glenwood. The win improves the Panthers to 3-2 in Class 3A District 1 — now in the position of trying to move up within the top four spots with a victory over Lewis Central in two weeks — and 5-2 overall. Glenwood falls out of contention at 1-3 in district play and 3-4 overall. Meanwhile, Harlan handed Lewis Central its first district loss with a 24-14 victory in a battle for first place. Perry defeated Atlantic 48-27 in a game involving teams at the bottom of the standings, and ADM traveled out of the district in a 45-28 loss to Newton. So, while Harlan leads the district at 4-0, LC and ADM are tied for second at 3-1, with Creston/O-M fourth at 3-2. LC still has games against Glenwood and Creston/O-M, while ADM faces Perry and Harlan. If Creston/O-M beats LC, a possible logjam of teams at 4-2 could come down to the point differential tiebreaker to determine placings, if none of the teams own victories over each of the others.
This week, the Panthers play a non-district game at home against Winterset (3-4), a 56-37 loser Friday to Oskaloosa.

Fast start
The Panthers set the tone early Friday night by scoring on their first play from scrimmage. Taking a direct snap, Trevor Frain bolted 55 yards on a trap play for the game's first touchdown. Frain was alone in the backfield taking a direct snap, as quarterback Brandon Phipps went in motion. That action left a hole in the Rams' defense. "They were in man coverage, and when Brandon went outside, their middle linebacker chased him," Frain said. "So, they had no linebackers there at all. I had a hole I could get three people through." Both Frain and Phipps said the play was successful despite a mixup at the line of scrimmage. "Some linemen were asking, was it 31 trap or 30 trap?" Phipps said. "So, they ended up blocking the wrong way. But the middle was wide open. Coach (Darrell) Frain made a good call." From that point on, Glenwood was playing from behind the entire night. Frain also scored on a 15-yard run and 15-yard pass from Phipps as the Panthers took a 21-0 lead. He went left on the short-yardage "tank" formation behind lead blockers Adam Baker and Wyatt Thompson, and used some deception for the pass reception.

Chiefs rolling
KANSAS CITY — The Kansas City Chiefs defense had 10 sacks and three interceptions in a 24-7 win over the Oakland Raiders, improving the Chiefs to 6-0. Jamaal Charles rushed for 78 yards and two touchdowns while Alex Smith passed for 128 yards. All three interceptions of Raiders quarterback Terrelle Pryor led to Kansas City points, including Husain Abdullah's 44-yard return for a touchdown for the final margin. Pryor threw a 39-yard touchdown pass to Denarius Moore that gave Oakland (2-4) a 7-0 lead in the second quarter. The Chiefs snapped a six-game home losing streak to the Raiders and are 6-0 for the first time since a 9-0 start in 2003. The Chiefs did it in front of a crowd that set the world record for loudest outdoor arena.
"Everybody flew to the pitch man (Adam Baker)," Phipps said. "It was a naked (bootleg) and Frain just chip blocked for a second and then was wide open. They had a guy blitzing." It became 28-0 on a 5-yard run by Baker, the first of his three touchdowns. Baker finished with 140 yards rushing on 17 carries. He later scored on runs of 66 and 24 yards. "On the 66-yarder, right when he hit the hole I just threw my arms up in the air," Phipps said. "I knew that was a touchdown. That 25 kid (Spencer Smith) was fast for them, and he outran

Please see PANTHERS, page 7A

Contributed photo by TERRY FREEMAN: Creston/O-M's Bryce Briley (82) escapes his last Ram defender as he returns a kickoff 95 yards for a touchdown after Glenwood had closed the Panther lead to 28-14 in the third quarter. Briley also had a pass interception in the Panthers' 49-21 victory.

Intense Panthers understood consequences
By LARRY PETERSON
CNA sports writer • lpeterson@crestonnews.com
Creston/O-M senior Maxx Walters had a busy night, covering Glenwood's deep passing threat (Brandon Bichel) and running pass routes for the Panther offense. But there was work done before the Panthers even took the field, Walters said, to set the mindset of urgency felt by this year's seniors. A loss likely would knock Creston/O-M out of the playoff hunt.

Please see CONSEQUENCES, page 8A

Cowboys win
ARLINGTON, Texas — Dwayne Harris returned a punt 86 yards for a touchdown and set up another score with a 90-yard kickoff return as the Dallas Cowboys handed Robert Griffin III his first loss in his home state with a 31-16 win over the Redskins (1-4).

Wolverines clip Cardinals in district win
By JAKE WADDINGHAM
CNA staff reporter • jwaddingham@crestonnews.com
GREEN.

CNA photo by JAKE WADDINGHAM: Nodaway Valley running back Zach Brewer (21) breaks a tackle Friday night against Treynor.
The senior carried the ball 38 times and rushed for 242 yards in the 35-16 win. While the ground game was the core of NV's strategy, quarterback Jackson Lamb was able to connect with TJ Bower and Baley Shantz for drive-saving pass plays. Bower averaged 21.3 yards per catch and totaled 64 yards. Baley Shantz went for 26 yards and a touchdown.

Area football

Mount Ayr 34, Central Decatur 13
LEON — Mount Ayr forced seven Central Decatur turnovers, helping the Raiders to a 34-13 win over the Cardinals here on Friday. The Raiders picked off CD sophomore quarterback Peyton Pedersen five times and also recovered two fumbles. The five interceptions give Mount Ayr 18 for the season, breaking the school's single season record. Erik Freed picked Pedersen off three times, while Kyle Dolecheck finished with two interceptions, giving him seven for the season, which places him tied for second in Class 1A. "Our defensive secondary is doing a fantastic job," co-head coach Delwyn Showalter said. "Coach Rinehart and coach Weeda have really worked those guys hard. They're making some phenomenal plays." Dolecheck also caught two touchdown passes in the game — one from Jacob Sobotka and one from Jacob Taylor — leading the team with three catches for 102 yards. Freed led the team in rushing with 78 yards, while Riley Weehler picked up 63 yards on the ground. Weehler led the defense with 10 total tackles, while Cody Stackhouse and Caleb Schnoor each finished with nine total tackles. Stackhouse had two sacks, while Tucker Winemiller and Joe Ricker each recorded one sack, as well. Mount Ayr, now 6-1 overall and 3-1 in Class 1A District 7, hosts Ogden on Friday for the team's senior night and Pink Out event. The game also has big implications for district seeding, as the winner will move into sole possession of second place for the time being.
The two teams met twice last season, with both games being decided by just one touchdown.

Mount Ayr 34, Central Decatur 13
MA 14 13 7 0 - 34
CD 0 7 0 6 - 13
1st quarter
MA — Erik Freed 4 run (Zach Vanderflught kick), 6:51
MA — Riley Weehler 1 run (Vanderflught kick), 6:14
2nd quarter
MA — Lincoln Martin 3 run (Vanderflught kick), 11:02
CD — Kyle Palmer 52 pass from Peyton Pedersen (Sydney Smith kick), 9:34
MA — Kyle Dolecheck 59 pass from Jacob Sobotka (Kick failed), 3:28
3rd quarter
MA — Dolecheck 27 pass from Jacob Taylor (Vanderflught kick), 5:34
4th quarter
CD — Trenton Wells 50 pass from Pedersen (Kick failed), 3:56
Individual leaders
Rushing: MA - Erik Freed 14-78, Riley Weehler 9-63, Lincoln Martin 4-44, Cody Stackhouse 12-24, Jacob Sobotka 5-22, Caleb Schnoor 3-13, Grant Staats 3-6, Jacob Taylor 2-4. CD - Trenton Wells 7-16, Peyton Gilbert 6-15, Wyman Reed 3-9.
Passing: MA - Jacob Sobotka 4-7-0 for 69 yards, Jacob Taylor 5-13-0 for 92 yards. CD - Peyton Pedersen 11-27-5 for 227 yards.
Receiving: MA - Kyle Dolecheck 3-102, Lincoln Martin 3-36, Grant Staats 2-15, Riley Weehler 1-8. CD - Trenton Wells 4-131, Kyle Palmer 2-60, Daniel Weir 2-36.
Tackles (solo-assists): MA - Riley Weehler (3-7), Cody Stackhouse (3-6), Caleb Schnoor (2-7), Joe Ricker (3-2), Jacob Taylor (2-1), Kyle Dolecheck (3-0), Logan Stark (0-3), Grant Staats (3-0), Tucker Winemiller (1-2), Mason Mercer (2-1), Jake Ricker (1-2), Jacob Sobotka (1-1), Lincoln Martin (1-1), Lincoln Lutrick (0-2).

Exira/EHK 68, Southwest Valley 13
CORNING — Exira/EHK showed why it's the No. 1 team in the state for Eight-Man here on Friday, rolling through Southwest Valley, 68-13. The combination of Trey Sander and Drew Peppers proved to be tough to stop, as the pair combined for 311 rushing yards and six rushing touchdowns. Sander also had punt returns of 64 and 60 yards for touchdowns. "They're a very good football team," Southwest Valley head coach Kent Bass said. "Not a lot of weaknesses.
Our overall effort early was up to our liking, but on a snap for snap consistency basis, they were just better than we were." The Spartans held a powerful Southwest Valley rushing attack to just 114 yards on the ground, led by Jordan Stuart's 55 yards. Zach Barton added 30 yards. They each punched one into the end zone for a touchdown in the loss. "As the playoffs approach, a game like this shows in a playoff atmosphere, every mistake is magnified," Bass said. "Good teams are going to take advantage of mistakes every time, so limit mistakes." Southwest Valley (5-2 overall and 4-1 in Eight-Man District 8) travels to Fremont-Mills (6-1, 5-0) in a game that could potentially cause a three-way tie atop the district standings with a Southwest Valley win. "A big game in the district, obviously," Bass said. "Fremont has some big, physical kids again. We've got to make sure we do our job and trust the kid next to us to do his."

Exira/EHK 68, Southwest Valley 13
E/EHK 21 27 14 6 - 68
SWV 0 0 6 7 - 13
1st quarter
E — Trey Sander 60 run (Kick successful), 8:22
E — Drew Peppers 52 run (Kick successful), 6:48
E — Peppers 42 run (Kick successful), 3:24
2nd quarter
E — Sander 64 punt return (Kick successful), 9:12
E — Sander 23 run (Kick successful), 8:15
E — Nick Peppers 39 interception return (Kick successful), 4:39
E — Sander 16 run (Kick failed), 1:00
3rd quarter
E — Drew Peppers 15 run (Kick failed), 9:23
SWV — Jordan Stuart 28 run (2-pt. failed), 5:26
E — Sander 60 punt return (2-pt. successful), :10
4th quarter
SWV — Zach Barton 12 run (Ryan Carlson kick), 9:30
E — Drew Buckholdt 3 run (2-pt. failed), 4:43
Individual leaders
Rushing: E - Drew Peppers 10-138, Trey Sander 7-174, Drew Buckholdt 8-38. SWV - Jordan Stuart 10-55, Zach Barton 10-30, Tyler Bissell 8-10, Gunnar McCuen 3-7, Jerad Shires 1-3, Preston Fleharty 2-8.
Passing: E - Drew Peppers 1-4-0 for 7 yards. SWV - Jordan Stuart 3-8-1 for 50 yards, Colby Cooper 1-1-0 for 2 yards.
Receiving: E - Bret Johnson 1-7.
SWV - Kallan Schmelzer 2-27, Nick Spring 1-13, Preston Fleharty 1-2.
Tackles (solo-assists): SWV - Kallan Schmelzer (4-2), Zach Barton (3-1), Clint Mullen (3-1), Gunnar McCuen (3-0), Keegan Longabaugh (4-2), Ryan Carlson (1-1), Nick Spring (1-1), Cody McMann (2-1), Buck Schafroth (2-0), Cody Steinbach (2-0).

Mormon Trail 30, East Union 12
GARDEN GROVE — East Union had success with its run game here Friday, but just couldn't find the end zone often enough to keep pace with Mormon Trail in a 30-12 loss. Trevor Barnett carried 26 times for 110 yards, while Mike Ward and Cole Campbell added 52 and 51 yards, respectively. Campbell scored two touchdowns on the ground for the Eagles, now 2-6 overall and 1-4 in Eight-Man District 6. "We have done a lot of rushing," East Union head coach Steve Kinyon said. "The problem is, when you rush, you don't always get that quick score you need. We were pounding the ball, and just couldn't get the ball in the end zone." Mormon Trail built a 22-6 lead by halftime and held on in the second half for the 30-12 win. Jakob Horton carried the ball 26 times for 205 yards and four touchdowns for Mormon Trail, which improves to 2-4 overall and 2-3 in District 6. Mike Ward led the Eagles with 16 total tackles, including three solo tackles for loss and one solo sack. Barnett and Brandon Wittstock each recorded 11 total tackles. Kyle Seales recovered a fumble. "They played hard," Kinyon said about his team. "I watched the film about three times now and it just comes back to little things, and we'll have to do the little things in order to get better. That's what it amounts to." East Union hosts Murray (6-2, 4-1) on Friday. "They have a pretty good rushing attack," Kinyon said. "Their special teams are really good. We have to be ready for that."

Mormon Trail 30, East Union 12
Individual leaders
Rushing: EU - Trevor Barnett 26-110, Mike Ward 16-52, Cole Campbell 15-51. MT - Jakob Horton 26-205, Andrew Reed 10-51, Joe Minnick 2-24.
Passing: EU - Cole Campbell 2-7-1 for 3 yards. MT - Kelton Stansberry 0-4-0 for 0 yards.
Receiving: EU - Jesse Akers 1-2, Mike Ward 1-1.
Tackles (solo-assists): EU - Mike Ward (4-12), Brandon Wittstock (5-6), Trevor Barnett (3-8), Tyler Kelley (2-3), Brogan Kinyon (2-3), Garrett Oliver (0-5), Kyle Seales (1-4), Devon Anderson (2-0), Cole Campbell (1-1), Mason Gossman (0-2), Cole Oliver (1-1).

Murray 51, Colo-Nesco 18
MURRAY — Murray stayed in contention for second place in Eight-Man District 6 with a 51-18 victory over Colo-Nesco to close out the home portion of the regular season Friday. Murray improved to 6-2 overall and 4-1 in district play, while Colo-Nesco fell to 2-6 and 2-4. It was the final regular season appearance at Mustang Field for seniors Cole Weir, Tyler Cashman, Collin Titus, Alex Shaw, Sam Rockhold, Andrew Rider, George Barber and manager Dakota Matlage. Rider surpassed 1,000 yards rushing in a season for the second time in his career. He had 1,072 yards as a sophomore and 851 last year when his season was cut short by injury. Rider rushed for 169 yards and four touchdowns on 14 carries. Seth Nerness carried 10 times for 82 yards and two TDs. Rockhold opened the scoring with a 40-yard run. "It wasn't our sharpest game of the year, but we ran the ball well and the victory kept us in position to capture second place in the district," said coach Keith Shields. Thane Simmons mixed in 64 yards passing on 6-of-11 with no interceptions. Leading the defensive charge was Barber with 14 total tackles, including seven for losses and a quarterback sack. Nerness had two tackles for losses with a fumble recovery, and Rockhold had nine tackles with an interception. "Other than one long run in the first quarter, our defense did a great job of stopping the Colo-Nesco offense in the first half," Shields said. "They had one 58-yard run, but only had 54 yards of offense on 25 plays the rest of the half.
Our run defense played well in the second half, giving up 38 yards on 21 carries." The Mustangs play at East Union (2-6, 1-4) this week.

Murray 51, Colo-Nesco 18
C-N 6 0 0 14 - 18
Murray 13 17 14 7 - 51
1st quarter
M — Sam Rockhold 40 run (kick failed), 10:03
M — Andrew Rider 7 run (Brandon Short kick), 6:28
C — Taylor Hills 58 run (run failed), 5:15
2nd quarter
M — Rider 41 run (Short kick), 6:46

Please see AREA, page 8A

State Treasurer Michael L. Fitzgerald's Great Iowa Treasure Hunt
Each year millions of dollars in unclaimed money are reported to the office of the treasurer of state. All names listed are from the last reporting periods; log on to our website to print your claim form today. Treasure seekers may also write to State Treasurer Michael L. Fitzgerald, Great Iowa Treasure Hunt, Lucas State Office Building, Des Moines, IA 50319. You will be asked to prove ownership. Get your claim form today at greatiowatreasurehunt.com

AFTON
Love Lucinda K, 309 N Douglas St ............ 2013066229
CRESTON
Helen Mernaugh Helen, 1203 W Adams St ............ 2013081851
Adamson Kent M., 509 N Birch St ............ 2013143587
Barncastle Heidi, 306 S. Birch ............ 2013019518
Bishop Stephan, 618 N Mulberry St ............ 2013065313
Blue Janice T, 508 N Oak ............ 2013009611
Bonebrake Ted, 1610 W Townline St Ste 100 ............ 2013061865
Brown Rosetta M, 907 W Mills PO Box 197 ............ 2013102067
Carter Jared, 2184 Pole Rd ............
2013143596
Clymer Bradley, 511 N Elm C ............ 2013097033
Cron Baseball/Softball Inc, PO Box 522 ............ 2013082108
DCI Environmental, 7217 128th St W ............ 2013062207
Dempsey Arlis C, 1188 Ivy Ave ............ 2013210644
Draper Larry C, 420 Livingston Ave ............ 2013219960
F And M Body Shop, 203 N Division St ............ 2010075892
Fishler Erin, ............ 2013065936
Fletcher William L, C/o Heather Orr 240 Myrtle ............ 2013101474
Fry Amanda K, 227 S Lincoln ............ 2013211261
Grimes Y, 1272 Uban Ave ............ 2013006432
Johnson Andrew J, 2851 190th St ............ 2013211264
Kelly Gayleen, 203 N Division St ............ 2010075892
Krell Steven, 217 S Chnut ............ 2013070852
KSIB, 1409 Hwy 34 ............ 2010143165
Lewkow Paige K, 1773 160th St ............ 2013203027
Miller Donald E, 1464 Hwy 34 ............ 2013150978
Minor Ruby I, 620 S 15th ............ 2013217420
Minor Rue K, 620 S 15th ............ 2013217420
Oriley Emma C, RR #4 ............ 2013137220
Pettit Farm, 1757 150th St ............ 2013165893
Seley Dustin James, 103 W Taylor ............ 2013178289
Swanson Fern T, 1002 Cr Dr ............ 2013133495
Vanblaricum Jennifer, 300 North Oak Apt 2 ............
2012166042
Walsh Jim, 1317 150th St ............ 2013063665
Walter Robert, 1145 Uban Ave ............ 2013177088
Willis Jack, 810 W Adams ............ 2013070853
Wood Russel J, 811 N Sumner Ave ............ 2013192521
LORIMOR
Burnside Robyn, 1058 Unicorn Ave ............ 2013062323
Connelly Darl, 3110 Rea Rd ............ 2013219575
Oglesbee Eva, 1611 Yellow Rose Ave ............ 2013219597
SHANNON CITY
Porter Kerry, 101 Spruce St ............ 2013137092
THAYER
Mcintire Wylie, 2352 Willow Ave ............ 2013194340

PANTHERS: Continued from page 5A
him." Frain finished with 67 yards rushing on seven carries. While the Panthers relied mostly on a running attack and special teams plays, Phipps finished 6-of-8 passing for 48 yards and one TD.

Ram catalyst
Quarterback Corey Bertini was the catalyst for the Rams, compiling 160 yards rushing and two touchdowns and also passing for 155 yards and a TD. But, right after his second TD cut the Panther lead to 28-14 in the third quarter, Bryce Briley fielded the Glenwood kickoff and raced 95 yards up the left side for a touchdown to make it 35-14. "There was a seam up the left side," Briley said.
"I didn't look back or anything. I didn't want anyone to catch me." Later in the third quarter, Tayler Pettit pressured Bertini on two consecutive plays, followed by an interception by Gavin Leith. On the very next play from scrimmage, Baker raced 66 yards to make it 42-14.

CNA photo by LARRY PETERSON: Linebacker Gavin Leith (8) of Creston/O-M returns an interception in the second half of Creston/O-M's 49-21 victory Friday. The Rams had 393 yards of total offense, but suffered from two interceptions and a lost fumble. Leith also had 10 total tackles and a fumble recovery.

Baker's 24-yard TD scamper in the fourth quarter accounted for the final score at 49-21 after Bertini had thrown to Gage Shatava for a touchdown. Briley also had an interception and 22-yard return in the game. Frain had 17 total tackles, and Levi Eblen, switching from outside linebacker to inside linebacker in the absence of Seth Maitlen due to injury, accounted for 15 total tackles. Head coach Brian Morrison said Eblen was a quick study at the new position, as Seth Maitlen was held out with an ankle injury. It was a tough task, as this Ram offense posted 38 points against ADM and 24 against district leader Harlan. "We had a kid who had an hour and a half to learn how to play inside linebacker, and I thought he did a heck of a job," Morrison said. "Especially against that run game. Every game they've played has been a track meet. They're explosive. Our job was to keep the ball in front of us, and I thought we did a good job of that." Conner Pals was 7-of-7 on extra points to successfully cap all of the Panther scoring drives. Morrison said it was a good night for the offense, as well. "Our offensive game plan was right on," he said. "Our assistants did a heck of a job throughout the week, coaching our kids up, and the kids executed tonight.
Our kids played smart and played hard.” No letdown After each of the scores while building a 28-0 lead during the first half, the coaches stepped up their intensity to remind the players to maintain their focus, saying “(play like) it’s 0-0!” “We’re trying to not change the mindset of our players,” Morrison said. “We tell them not to relax, because when you relax bad things happen. That’s a mentality we’re trying to teach.” On the other side of the field, Glenwood coach Cory Faust was disappointed. The 1997 graduate of Nodaway Valley, a highly successful coach at AHST before taking over the Rams, said everyone in the program has to get better. Panther summary Creston/O-M 49, Glenwood 21 Points by quarter Glenwood 0 7 14 0 — 21 Creston/O-M 7 21 14 7 — 49 SCORING SUMMARY First Quarter C/O-M — Trevor Frain 55 run (Conner Pals kick), 9:02 Second Quarter C/O-M — Frain 15 run (Pals kick), 7:06 C/O-M — Frain 15 pass from Brandon Phipps (Pals kick), 6:40 C/O-M — Adam Baker 5 run (Pals kick), 3:18 G — Corey Bertini 1 run (Brandon Bichel kick), :31 Third Quarter G — Bertini 48 run (Bichel kick), 8:25 C/O-M — Bryce Briley 95 kickoff return (Pals kick), 8:07 C/O-M — Baker 66 run (Pals kick), 3:21 G — Gage Shatava 25 pass from Bertini (Bichel kick), :25 Fourth Quarter C/O-M — Baker 24 run (Pals kick), 3:15 ————— TEAM STATISTICS Glen C/O-M First downs 23 13 Rushes-yards 52-238 35-258 Passing yards 155 48 Total yards 393 306 Punts-avg. 3-40.3 3-35.0 Return yards 48 193 Fumbles-lost 4-1 0-0 Penalties-yards 5-65 6-73 INDIVIDUAL STATISTICS RUSHING — C/O-M: Adam Baker 17-140, Trevor Frain 7-67, Brandon Phipps 7-47, Conner Pals 1-4. Glen: Corey Bertini 23-160, Spencer Smith 13-45, Gage Shatava 7-33, Logan McCorkle 2-8, Zach Kirkpatrick 2-7. PASSING — C/O-M: Phipps 6-8-0 for 48 yards. Glen: Bertini 13-22-2 for 155 yards. RECEIVING — C/O-M: Jesse Rathe 3-19, Frain 2-18, Maxx Walters 1-11. Glen: Shatava 3-36, Cob Baker 2-25, Brandon Bichel 3-22, McCorkle 1-17, Smith 1-10. PUNTS-AVG. 
— C/O-M: Gavin Leith 3-35.0. Glen: Bichel 3-40.5.
TACKLE LEADERS (solo-assists) — C/O-M: Trevor Frain 5-12, Levi Eblen 4-11, Leith 2-8, J. Rathe 3-6, Bryce Briley 5-4, Alex Chapman 1-6, Jordan Thompson 0-6, Baker 0-5, Cory Rathe 0-5, Walters 2-1, Cameron Leith 0-3, Jake Miller 0-3, Spencer Wray 0-2, Tayler Pettit 1-1.
INTERCEPTIONS — Leith 1, Briley 1.
PASS BREAKUPS — Frain 1, Walters 1.
FUMBLE RECOVERIES — Leith 1.
TACKLES FOR LOSS (YARDS) — Pettit 1-6, J. Rathe 1-5.
SACKS — Pettit 1.
"You have to credit Creston," Faust said. "They have a good team and they do a nice job. On offense, we had some breakdowns in pass protection and we put the ball on the ground a lot. We're really struggling to stop the run on defense. We're just not playing very well right now, and we have to coach better. Have to play better." Glenwood finishes with home games against Lewis Central and Atlantic.

AREA: Continued from page 6A
M — Rider 12 run (Short kick), 1:11
M — Short 29 FG, :14
3rd quarter
M — Rider 29 run (Short kick), 9:42
M — Seth Nerness 26 run (Short kick), 5:22
4th quarter
C — Blake Stevens 7 pass from Hill (pass failed), 6:26
M — Nerness 12 run (Short kick), 5:15
C — Zach Jamison 18 pass from Hill (run failed), 3:04
Individual leaders
Rushing: M - Andrew Rider 14-169, Seth Nerness 10-82, Sam Rockhold 1-40, Matt Barnes 3-18, Thane Simmons 5-3, Dylan Callstrom 3-1.
C-N - Taylor Hill 23-76, Blake Stevens 8-39, Clayton Bryan 6-14, Grant Tiarks 3-4, Dacian Wilson 1-4.
Passing: M - Thane Simmons 6-11-0 for 64 yards. C-N - Hill 4-11-1 for 56 yards; Wilson 1-2-0 for 9 yards.
Receiving: M - Sam Rockhold 3-26, Barnes 3-38. C-N - Evan Webb 2-25, Zach Jamison 1-18, Clayton Bryan 1-15, Blake Stevens 1-7.
Tackles (solo-assists): M - George Barber 10-4, Barnes 7-4, S. Rockhold 5-4, Nerness 3-5, Rider 6-3.
Tackles for losses: M - Barber 7, Nerness 2, Barnes 1.
Sacks: M - Barber 1.
Fumble recoveries: M - Nerness 1.
Interceptions: M - S. Rockhold 1.

CONSEQUENCES: Continued from page 5A
"It means everything to us," Walters said. "This team is family. We knew the consequences if we lost. There were heart-to-heart talks in the locker room. We said, play for us, but play for yourselves, too. You're never going to get this back." Phipps said the seniors have talked for years about making a playoff run. The program's postseason appearances the past two years have ended in opening-round defeats. "We've all had this dream of being state champions," quarterback Brandon Phipps said, "and one step is making the playoffs. We've had a couple of tough games. Last week (34-7 loss at Harlan) kind of lit a fire under us, really. It motivated us to come out and show that we're not a bad team. We're a dang good team when we play together, and that's what we did." Coach Brian Morrison said the team is overcoming some midseason injuries. But, players like Alex Nielsen and Seth Maitlen are on the mend, preparing to play in the district finale against Lewis Central, if not earlier in the last regular season home game against Winterset. "We had a great week of practice and we practiced without pads," Morrison said. "That helped. We were fresh tonight. We were dinged up across the board. I told the kids, at this point last year we were losing kids to season-ending injuries. This year we're going to get kids back. We're 5-2 and we couldn't be happier with this win tonight." Walters called the 49-21 win "a total confidence booster." Phipps said there are still accomplishments to shoot for in the upcoming two weeks. "The LC game is huge for us," Phipps said. "We just need to go out (this week) and keep getting better. We want to get the number two seed and play at home in the playoffs."

Spartans 3-1 at DMACC Invite
BOONE — Southwestern's volleyball team continued its winning ways with a 3-1 mark at the DMACC Invitational over the weekend. On Friday the Spartans defeated Highland (Kan.) 25-18, 25-8, 25-21 before sweeping Kishwaukee (Ill.) 25-21, 25-11, 25-11. No. 2-ranked Parkland (Ill.) Community College defeated SWCC Saturday, 25-18, 25-13, 23-25, 25-18. The Spartans finished the event victorious over North Dakota School of Science 25-15, 25-20, 25-17. The Spartans take a 23-9 season record against Grand View JV tonight here. Coach Melissa Blessington said her team played good volleyball over the weekend. "This was a very good weekend for us," Blessington said. "Playing Parkland was a great experience for our girls, to see that we can play with teams like that. This tournament was a good measuring stick for us. Last year we went 0-4 against many of the same teams. Our girls felt good about their play and I know we are going to keep fighting to get better."

SWCC 3, Highland (Kan.) 0
Digs — Molly Schimp 21, Janaya Fox 16. Setting (assists) — Cassidy Yong 31. Hitting (kills) — Mel Mangrum 8, Alex Duffy 8, Shelby Sullivan 7, Allyson Kocour 6, Jenna Casey 5. Serving (aces) — Ashley Strong 11-12 (1).
SWCC 3, Kishwaukee (Ill.) 0
Digs — Schimp 19, Strong 11, Fox 8. Setting (assists) — C. Yong 30. Hitting (kills) — Casey 11, Mangrum 8. Serving (aces) — C. Yong 16-16 (5), Strong 10-10 (2). Blocks — Duffy 3.
Parkland (Ill.) 3, SWCC 0
Digs — Schimp 17, C. Yong 14, Strong 11, Fox 11. Hitting (kills) — Kocour 11, Fox 10, Sullivan 6, Mangrum 6. Serving (aces) — Fox 13-14 (1). Blocks — Duffy 3, Sullivan 3, Mangrum 3.
SWCC 3, N. Dakota Science 0
Digs — Schimp 11, Strong 8, Serena Yong 7. Setting (assists) — C. Yong 32. Hitting (kills) — Sullivan 8, Duffy 8, Casey 7, Mangrum 6. Serving (aces) — Strong 13-15 (3), C. Yong 22-24 (5), Fox 12-14 (2). Blocks — Mangrum 5, Sullivan 4.

Temporary winter maintenance positions
Iowa Department of Transportation
The Iowa DOT is hiring several temporary winter maintenance positions throughout the state. Duties: Operate maintenance/snow removal equipment, such as end loaders, single- or tandem-axle dump trucks equipped with attachments such as snowplow blades. Requirements: Possess and maintain at least a Class B commercial driver's license with an air brake endorsement and will be subject to drug and alcohol testing requirements. Compensation ranges from $11.39-$15.97 per hour. For more information and to apply visit: The Iowa Department of Transportation is an equal employment opportunity/affirmative action employer. Women, minorities and persons with disabilities are encouraged to apply. If you need accommodations because of a disability to access the Iowa Department of Transportation's services, contact the agency's affirmative action officer at 800-262-0003.

NOW HIRING: Engineering and Manufacturing
ALMACO is the leader in providing innovative solutions to our clients in the seed research industry. Our niche business provides solutions for planting, harvesting and seed processing for corn and soybeans around the world. We are family owned and operated and are located in Nevada, Iowa. Join our growing team!
CURRENT OPENINGS:
Administrative Assistants - Engineering
New Product Development Director
Project/Design Engineers
Electrical/Software Control Engineers
Software Control Programmers
Manufacturing Engineers
Electronic Field Service Engineer
Field Service Technicians
Mechanical Drafters
Ag Mechanic/Final Assemblers
Welder Fabricators
Quality Tester

HEALTHCARE REFORM TOWNHALL MEETINGS SCHEDULED IN YOUR AREA
Changes are coming in 2014. Are you ready? Learn about your rights and responsibilities under federal law. For More Information: Visit: insuranceca.iowa.gov, Call: 1-877-955-1212, Email affordablecareact@iid.iowa.gov. Presented by the Iowa Insurance Division, the Iowa Department of Human Services and the Iowa Department of Public Health.

Card of Thanks
I would like to thank my family and friends for the cards for my 90th birthday. It was also fun to reminisce with so many at the open house. LaVerne Grubbs

$50 or Less
FOR SALE: OAT STRAW bales, clean heavy bales, $4.50 each, 641-344-5369.

Lost & Found
JOHN DEERE LAWN TRACTOR gas cap found at 1200 W. Prairie St., Creston. Free to owner who can identify, 641-782-4230.

Special Events
IOWA'S LARGEST ARTS AND CRAFTS SHOW, Oct. 26-27, UNI-Dome, Cedar Falls, Iowa. Over 300 Talented Exhibitors, Sat. 9-5, Sun. 9-4. Adm. $6. A Fantastic Shopping Event. 563-652-4529.

CLASSIFIED OFFERS a simple solution...if you need a new home, apartment, a better car or the services of an expert repairman.

Employment
$8.50/HOUR. PT/FT CLERICAL. 20-40 hrs/week M/F. Candidates must be reliable, self-motivated, and detail oriented with basic computer skills. Please send resume to Guaranty Abstract Company, PO Box 404, Creston, IA 50801.

CAREGIVER NEEDED ORIENT: Assist Senior with non-medical help (bathing, dressing, and housekeeping), M/W/F 9am-12:30pm. Great permanent part-time position! $8.75/hour PLUS BONUS! Jamie at Caretech, 800-991-7006.

CHECK 'N GO IS NOW hiring for a FULL TIME STORE MANAGER in Creston, IA, please apply online at our careers site or careerbuilder.com.

Business Services
McNEILL TREE SERVICE. Topping, Trimming and Removal. Free Estimates, insured. Call David at 641-344-9052.

ATTENTION! BE WORRY FREE! Ron & Dora's Handyman Service provides mowing, snow removal and landscaping, house/property clean up. Also buy-out Estates and storage units! Call for any questions, Ron 641-782-0521 or Dora 641-782-0520.

CLARK'S TREE & STUMP Removal. Free Estimates, Insured. Call 641-782-4907 or 641-342-1940.

Miscellaneous
STEEL BUILDINGS. Big or Small, Save up to 50%. For best deal with contract construction to complete. Source#18X 800-964-8335.

TO OUR READERS
Creston Publishing Company does not knowingly accept advertising which is in violation of the law. We do not knowingly accept advertising that is fraudulent or has malicious intent. While we attempt to screen advertising with potential of fraud, it is impossible to screen all potential problems. We strongly encourage readers to exercise caution and common sense, particularly when dealing with unfamiliar companies.

$50 or Less
FOR SALE: PINK rocker/recliner, $25.00, 641-782-3761.
FREE: STYROFOAM INSULATION, approx. 12 pieces of 12"x5 ft. square. From tornado-damaged Middle School roof. The Bookstore, 641-782-8822.
THREE HALLOWEEN costumes: 1 blond wig $5.00, poodle skirt $10.00, zebra ears and tail, new never opened, $12.00; Black LG cell phone with adapter $10.00; women's reversible (pink/teal) coat, $15.00; 641-782-6144.

Church Dinner — Hosted by Creston United Methodist Women
400 N. Elm Street • Creston. Sunday, Oct. 20th from 11:30 to 1:00. Join us for: Roast Beef or Roast Pork, Mashed Potatoes, Corn, Beans, Rolls, Variety of Salads and Desserts, Beverages included. $8 for Adults • $5 for Children (5-11) • Free for Children under 5.

Our local pork producers do a great job for our community. So in honor of National Pork Month, we have great prices on Frigidaire freezers...stop in and check it out! 7 Cu. Ft. Chest, 17 Cu. Ft.
Upright, and more. All freezers on sale now!!
7 Cu. Ft. Chest, Model FFFC0723DW: $279
17 Cu. Ft. Upright, Model FFU17M7: $599
15 Cu. Ft. Chest, Model GLFC1526DW: $499
21 Cu. Ft. Upright, Model FFU21M7: $699
14 Cu. Ft. Upright, Model FFU14M5 (Manual Defrost): $499
Other sizes and models also available...check out our selection!
Cook Video & Appliance — Financing Available with Qualified Credit — Hwy. 34 East • Creston • 641-782-5112. Mon-Fri 8:30am - 5:30pm • Sat 8:30am - 2pm.

Help Wanted
Full Time Charge Nurse, RN Preferred. Housekeeping. Part Time CNA. Apply in Person. EOE - Drug Free Employer. 1000 E. Howard • Creston, 782-5012.

JOB OPENINGS — matura
Licensed Substitute Teacher with a love of early childhood to serve children and families in Bedford for a 4-6 week time period.
Long-Term Licensed Substitute Teacher with a love of early childhood to serve children and families in Creston.
Wrap-Around Classroom Aide to work 25-30 hours per week before and after preschool programming.
Send applications or resumé to the: matura Head Start Office, 209 N. Elm St., Creston, Iowa 50801, 641-782-6201, or e-mail to skeller@maturaheadstart.org. Position open until filled. E.O.E.

Creston News Advertiser, Monday, October 14, 2013

For Rent
APARTMENTS FOR RENT in Afton: nice efficiency, $350/mo.; nice 2-bedroom apartment, $475/mo.; stove/refrigerator furnished, references required, washer/dryer on premises, 641-344-5478.
3-BEDROOM 2 BATH mobile home on 3 acres, set up for horses by Greenfield, $700/month + $700 deposit and utilities, references required, 402-721-2313, leave msg.
ADAMS STREET COUNTRY LODGE, extended stays welcome, 641-344-5478.
New Today: GROUND FLOOR apartment, $365/mo., utilities paid, 641-344-8191.
VERY NICE 3 BEDROOM home with basement, 1 car garage, large yard NE Creston, no pets or smoking, $800/mo., 641-344-6228.

For Sale
AMMUNITION FOR SALE: Winchester 40 s/w $20.00/box 50rds, Herters 308 $18.00/box 20rds, Call 641-344-9991 for more info.
HEDGE POSTS, Circle T Ranch, Kellerton, 641-278-0296.
LARGE TV ENTERTAINMENT center, dark walnut color, $175.00, 641-337-5644.

Wanted
WANTED: TUTOR for high school math, 641-782-6599.
CARRIER NEEDED

Statewides
ADOPTION: Childless, loving couple pray to adopt. Stay at home mom, successful dad, great dogs & devoted grandparents. Legally allowed expenses paid. Bill & Debbie 800-311-6090 (INCN)
AUCTION: Antler Taxidermy Sporting & Gun Consignment Auction, October 25-27, Jackson County Fair Grounds, Maquoketa, Iowa. For More Info On Buying Or Selling Contact Steve Morehead 563-652-9780 (INCN)
HELP WANTED - TRUCK DRIVER: Best lease purchase in the industry with $.99/gal. diesel fuel, $100 weekly bonus, new trucks, top pay, and great freight lanes. Hirschbach 888-514-6005 or www.drive4hml.com (INCN)
TanTara Transportation is now hiring OTR Company Flatbed Drivers and Owner Operators. Competitive Pay and Home Time. Call us @ 800-650-0292 or apply online at www.tantara.us (INCN)

Auction Calendar
Complete sale information is published in the Wednesday edition of the Creston News Advertiser and/or the Southwest Iowa Advertiser.
Sat. Oct. 19, 10:00AM, Creston, IA. Real Estate, Household, Collectibles and Misc. for Pauline J. Dunn Estate. Auctioneers: Tom Frey, Darwin West, Todd Crill.
Sun. Oct. 20, 11:00AM, Creston, IA. Tools, Household, Furniture, Antiques & Collectibles for Daniels, Fils, Goodrich families and others. Auctioneers: Darwin West, Tom Frey, Todd Crill, Steve Bergren.
Sun. Oct. 27, 11:00AM, Lenox, IA. Furniture & Antiques, Tools, Toys & Misc. for Beverly Stream. Auctioneers: Jack Kretzinger, Dan Kretzinger, Tony Douglas.
Advertise your auction in the CNA Classifieds and we will include it in our "Auction Calendar."

Cromwell Congregational United Church of Christ Annual God's Portion Dinner and Sale
Saturday, October 19, 2013
Donations from: Adams Street Espresso, Afton Co-Op, Akin, American Family Insurance, Bailey Family, Barker Implement, Buxton Spraying, Creston Automotive, Curves, Fareway, Farm & Home, Ferarra Candy Company, Fireside Bar & Grill-Diagonal, Hometown Insurance, Hy-Vee, John Judd/Delores Doench, Kelly's Flowers, Mary Brunner, Maurices, Napa, Pampered Puppies, Pioneer, Schweser's, Sharon Bennett, Shear Styling, Stalker Chevrolet, Sweet Desires-Corning, Teresa's Floral, Titan Equipment, Tony Allen, True Value, Up In Smoke, Upper Crust Bakery, VanGelder Clothing, Wenzig Construction, and more to come.
Cromwell Quilters: (1) Queen and (1) King size quilt.
Contact Sandy Allison, Creston News Advertiser, 641-782-2141 x222. For a complete listing check out:

Join our Health Care Team

Clinic LPN/CMA. Full Time, 36 hours per week (72 hours per pay period). Duties include, but are not limited to: placing patients in examination rooms, ascertaining pertinent medical history, obtaining vitals, preparing and prepping charts, collecting pertinent testing in advance, assisting in scheduling, and assisting provider as needed. Additional duties include backing up other nursing staff, backing up lab and other duties as assigned. Apply online. 300 W. Hutchings St., Winterset, Iowa 50273. MCHCS is an equal opportunity employer.

Advanced Systems Engineer. Full Time, 36 hours per week (80 hours per pay period). The Advanced Systems Engineer's role is to ensure proper computer operation so that end users can accomplish business tasks. This includes actively resolving escalated end user help requests within established SLAs. Problem resolution may involve the use of diagnostic and help request tracking tools, as well as require that the individual give in-person, hands-on help at the desktop level. Project management as needed. Apply online. 300 W. Hutchings St., Winterset, Iowa 50273. MCHCS is an equal opportunity employer.

MAINTENANCE MECHANIC
Ferrara Candy Company is currently accepting applications for MAINTENANCE MECHANICS for our 2nd and 3rd shift operations. Employee will be responsible for performing equipment repairs, inspections, information research, and general plant maintenance. Must have a minimum of 2 years previous maintenance experience in a manufacturing facility. Experience with baggers and scales is a plus. Familiarity with computers and strong electrical and mechanical knowledge/experience is required. Welding and fabrication experience a plus. Must be able to read schematics and have the ability to use test equipment including voltmeters and ampmeters. Candidates with PLC knowledge preferred. Must be willing and able to work overtime as needed including weekends. Excellent wage and benefit package. Send resume or apply in person to: Ferrara Candy Company, Attention: Human Resources, 500 Industrial Parkway, Creston, Iowa 50801.

Inserter Position Opening (see below)

We are looking for someone to fill a full time position in our case management program. The program coordinates community services to help seniors remain in their own homes.
• Bachelor's degree in human services field; or currently licensed Registered Nurse. Can substitute up to two years of full-time work experience involving direct contact with people in overcoming social, economic, psychological or health problems in a human service field for two years of educational requirement.
• Must be able to work well with others and possess good oral and written communication and organizational skills. Experience in working with the elderly, a knowledge of community resources and the ability to speak Spanish a strong plus.
• Must have reliable transportation, valid driver's license, auto insurance and be able to travel in and outside of our service area, as job requires.
• Must have a good working knowledge of computers and common programs (i.e. Word, Outlook, Excel and Access).
• Successfully pass a Criminal background and Dependent Adult Abuse check before hire.
This position offers life insurance, medical, paid vacation, paid holidays, sick leave and IPERS. Salary will be based on experience. Please send, fax or email your resumé to: 109 N. Elm, Creston, IA 50801. Phone 641-782-4040, Fax 641-782-4519, Email: SMcDonald@connectionsaaa.org. An Equal Opportunity Employer/EOE/AA.

Cromwell dinner, continued: Bazaar open at 2:30 p.m. (new start time). Lots of baked goods, candies, pies and canned goods. Handmade items to include: baby bath towels, teddy bears, aprons, tie quilts, walker/wheelchair bags, backpacks with doll in pocket. Come expecting lots more!! Dinner 4:00 - 7:00 p.m.: Roast Beef, Ham Balls and all the fixin's. Adults $8.00 • age 5-11 $3.00 • under 5 free. Auction 6:30 p.m. (new start time) in Cromwell.

RECEPTIONIST/ACCOUNTING CLERK
Creston Publishing Company, publishers of the Creston News Advertiser, The SW Iowa Advertiser, the Osceola Sentinel Tribune and The Advertiser, is seeking an Accounting/Circulation Clerk-Receptionist to work in the front office. This is a 25-hour per week position in our Osceola, Iowa office. Responsibilities of the position include: daily cash drawer reconciliation and bank deposits, data entry of classified ads, circulation duties, customer service/front desk reception, mail sorting and mail delivery to the local post office.
The successful candidate will have a high school diploma, or equivalent, with basic computer skills, valid drivers license and insurance, acceptable driving record and reliable vehicle. Must possess good communication skills, be highly accurate and detail oriented and able to handle a variety of tasks while adhering to daily deadlines. Previous accounting experience preferred. Typical hours are five days per week (M/T/W/T/F), 9 am to 2 pm, 25 hours per work week. The chosen candidate will be subject to pre-employment background, driving record checks and drug screening. Interested candidates please pick up an application or send their resume to: Creston Publishing Company, Attention: Human Resources, PO Box 126, 503 W. Adams Street, Creston, IA 50801. Creston Publishing Company is a Drug Free Employer. Must be able to pass a pre-employment drug screen, background check and driver's check. This posting may not include all duties of the position.

Inserter Position Opening
Creston Publishing Company has a part-time inserter position available in our mailroom. Must be able to work flexible hours, Monday through Friday. Job responsibilities include putting printed advertising materials into our papers, stacking papers, bundling papers, helping package all products for delivery and distribution. We are looking for a reliable team player. Must be able to move/lift 35-pound bundles of newspapers/preprints for extended periods of time. Position requires repetitive lifting, bending, and standing for extended periods. Position also requires the ability to work under deadline pressure and to follow detailed instructions. Interested candidates can please stop in and fill out an application or send their resume to: Creston Publishing Company, Attn: Rose Henry, PO Box 126, 503 W. Adams Street, Creston, IA 50801. Creston Publishing Company is a Drug Free Employer. Must be able to pass a pre-employment drug screen, background check and physical.

Dial-A-Service
No Phone Calls Please. Equal Opportunity/Affirmative Action Employer.

Sow Farm Technician
Job sites located near Diagonal, Lenox and Thayer. This full-time position is responsible for the daily care, health, and maintenance of all animals at the worksite. Each technician is a vital member of a team of 10-12 people all dedicated to providing excellent animal care. This entry level opportunity provides hands-on experience in many of the following areas: animal movements, breeding and gestation, farrowing, recordkeeping and farm maintenance. The ideal candidate will have a desire to work with pigs, a willingness to learn, a high level of dependability and a solid work history. This position offers:
• All necessary training and certifications
• Base salary starting at $23,000 with potential for quarterly bonuses
• Eligibility to apply for the Manager In Training program after six months employment
• Full benefits: health, dental, vision, 401(k), Flex spending
• Paid holidays, sick days and vacation
• Adventureland and Iowa State Fair Family Days
• Get hired and refer a friend — we have a $1,560 Employee Referral Bonus!
Apply online, call 641-347-5065 or stop by 101 North Douglas in Afton to complete an application. Iowa Select Farms is an equal opportunity employer.

GAULE EXTERIORS. Steel and vinyl siding, replacement windows and seamless guttering. Quality craftsmanship, over a decade of professional service in Southwest Iowa. 641-782-0905.

Backhoe Service: Specializing in sewer, water & tile lines! New construction and repairs, Creston & Afton areas. Tom McGuire 641-344-0530.

WESTMAN WINDOWS. Replacement windows tilt for easy cleaning; rebates; bays, bows, sliders, etc. Any custom size and shape, 30+ years in Creston. I sell, service and install; for a no-pressure estimate call Charlie Westman 641-782-4590 or 641-344-5523.

Computer Repair: VINE STREET COMPUTER SOLUTIONS. 1205 North Vine Street, 641-780-5760. 12 years experience. Reasonable & Quality PC repair and tutoring.

BOWMAN SIDING & WINDOWS. All major brands of vinyl and steel siding; Heartland, Traco and Revere thermal replacement windows. Recipient of the Revere Premium Renovator Award. Seamless guttering and Leaf Relief gutter covers. 33 years of continuous reliable service in Southwest Iowa, free estimates, 641-322-5160, 515-473-1126 or 1-800-245-0337.

Consignment Store: TOO GOOD TO BE THREW. 114 N. Maple, Creston, IA. Mens, Womens, Childrens Clothing & Home Decor. Tue.-Fri. 10AM-5:30PM, Sat. 9AM-2PM.

Glass: QUALITY GLASS CO. Automotive, home, business and farm. Commercial lock service and trailer sales. Hwy 34 East, Creston, 641-782-5155.

Storage: SHARP'S SELF-STORAGE. Boats, records, inventory, furniture. You store it, lock it, take the key. Industrial Park, Creston, 641-782-6227.

Plumber: SCHROEDER PLUMBING and ELECTRICAL. Central air repair/new installations, new breaker boxes, lighting fixtures, softeners, water heaters. Specialize in manufactured and mobile homes. Free estimates, licensed, insured, 641-202-1048. Accept Visa & Mastercard.

Tree Service: MINERS TREE SERVICE. Tree Removal, Trimming, Stump Grinding, fully insured. Free estimates. Justin Miner, 712-621-4847.

The 4th Annual Holiday to Holiday Extravaganza
Thanksgiving | Christmas | New Years
Cooking, Decorating, Style Show, Tastings and more! Monday, October 28, 2013, Southwestern Community College - Creston. Doors Open 4:30 PM, Show Begins 7:00 PM. Featuring Dennis "kyp" Kuyper, kyp's kitchen - CNA, and Katie Routh, Chef, Henry A. Wallace Country Life Center. Many Food Samples at Vendor Booths. Lots and Lots of Door Prizes! Booths will have items for Sale "CASH & CARRY". Style Show will feature clothing and accessories for the holidays from area retailers. Tickets On Sale At The Following Businesses. Vendor Booth Spaces Available, Call 641-782-2141 ext 228. Sponsored by the
https://issuu.com/shawmedia/docs/cna-10-14-2013
Analysing data through visualization is the best way to understand it. Just looking at the inferred numbers is tedious and not very informative, and it takes much more time to filter out the outcomes and reasons. Therefore, rather than extracting analysis from raw numbers, you should visualize the data using different kinds of figures. In this post, you will learn how to create a bar chart from a dataframe using Plotly, and how to integrate it with your Flask web app.

Step by Step for Creating a Bar Chart

Step 1: Import the necessary libraries

The first step is to import the necessary libraries. I am using Plotly for plotting the chart and Flask for serving it. Pandas is used for reading the CSV and manipulating the data. The json and plotly imports are needed later, in Step 4.

from flask import Flask, render_template
import pandas as pd
import numpy as np
import json
import plotly
import plotly.graph_objs as go
import plotly.offline as plt
from charts.bar_chart import plot_chart

Step 2: Load the dataset.

You will plot the chart for a real-life example. I am using a dataset of country populations, GDP, etc. You can download it from Kaggle: Countries of the World Dataset.

df = pd.read_csv("countries.csv")

Step 3: Configure the layout and the data for the plot.

Plotly requires data and a layout for plotting. First, create a trace for the bar chart, then pass it the x-axis and y-axis values you want to plot. In this case, I want the country name on the x-axis and the GDP data on the y-axis.

trace = go.Bar(x=df["Country"][0:20],
               y=df["GDP ($ per capita)"][0:20])
layout = go.Layout(title="GDP of the Country",
                   xaxis=dict(title="Country"),
                   yaxis=dict(title="GDP Per Capita"))
data = [trace]
fig = go.Figure(data=data, layout=layout)
plt.plot(fig)

Step 4: Integrate it with the Flask app.

The method above only plots the chart offline. If you want to integrate this chart with the Flask app, you have to dump the figure into a JSON object.
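Under the hood, go.Bar, go.Layout, and go.Figure build plain dictionaries and lists, which is exactly what makes the JSON hand-off in Step 4 possible. As a minimal sketch of that structure (using hypothetical sample values instead of the Kaggle CSV, and no Plotly install, so this is an illustration rather than the post's exact code):

```python
import json

# Hypothetical sample values standing in for the countries.csv columns.
countries = ["Aruba", "Brazil", "Canada"]
gdp = [22000, 8100, 29400]

# The same structure that go.Bar / go.Layout / go.Figure serialize to.
trace = {"type": "bar", "x": countries, "y": gdp}
layout = {
    "title": "GDP of the Country",
    "xaxis": {"title": "Country"},
    "yaxis": {"title": "GDP Per Capita"},
}
fig = {"data": [trace], "layout": layout}

# This dict round-trips through JSON exactly as the Flask route requires.
fig_json = json.dumps(fig)
print(json.loads(fig_json)["data"][0]["type"])  # prints "bar"
```

Because the figure is just nested dicts and lists, no special serialization is needed for simple traces; Plotly's PlotlyJSONEncoder mainly handles extras such as NumPy arrays and dates.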
Figures generated by the Plotly framework are built from dictionaries and lists, so they serialize naturally to JSON. Here I pass the JSON to the Flask template under the variable name plot. Let's create the Flask route and add the above code inside it:

@app.route("/bar_chart")
def bar_chart_plot():
    df = pd.read_csv("countries.csv")
    trace1 = go.Bar(x=df["Country"][0:20],
                    y=df["GDP ($ per capita)"][0:20])
    layout = go.Layout(title="GDP of the Country",
                       xaxis=dict(title="Country"),
                       yaxis=dict(title="GDP Per Capita"))
    data = [trace1]
    fig = go.Figure(data=data, layout=layout)
    fig_json = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
    return render_template('charts.html', plot=fig_json)

In the last line of the function, you pass the HTML file name and the plot variable for use in the HTML file. Below is the skeleton of that HTML file. Create a "charts.html" file. Plotly requires d3.js and plotly.js for showing charts in HTML, so add both script tags inside the head section. In the body, call the Plotly.plot() JavaScript method to create the bar chart from the data and layout received in JSON format.

<head>
  <script src=""></script>
</head>
<body>
</body>

When you run the Flask app, you will get output like this.
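You can see what the template receives without running Flask at all: the plot variable is just a JSON string, and the browser-side JSON.parse mirrors the json.loads below. A sketch with stand-in names (render_template_stub and the sample figure are illustrative, not part of Flask or the post):

```python
import json

# Stand-in for fig_json as produced by
# json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder) in the route.
fig_json = json.dumps({
    "data": [{"type": "bar", "x": ["A", "B"], "y": [1, 2]}],
    "layout": {"title": "GDP of the Country"},
})

# Stand-in for render_template('charts.html', plot=fig_json): the template
# embeds this string, and the page's script parses it before calling
# something like Plotly.plot(div, fig.data, fig.layout).
def render_template_stub(template_name, **context):
    return template_name, context

page, context = render_template_stub("charts.html", plot=fig_json)

# What the browser-side JSON.parse would recover from the embedded string:
fig = json.loads(context["plot"])
print(fig["layout"]["title"])  # prints "GDP of the Country"
```

The key point is that the template only ever sees a string; turning it back into a figure object is the browser's job.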
https://www.datasciencelearner.com/how-to-create-a-bar-chart-from-a-dataframe-in-python/
Java Expert Solutions
Chapter 37
Creating Multi-User Programs in Java
by Mark Wutka

- Designing Multi-User Applications
- Adding Socket-Based Access to Multi-User Applications
- Other Issues When Dealing with Sockets
- Adding RMI Access to Multi-User Applications

One thing that attracts thousands of people to the Internet is its interactive nature. The popularity of multi-user chat programs like IRC and various multi-user games like MUDs (Multi-User Domain/Dungeon/Dimension) illustrates that fact very clearly. In the beginning, multi-user programs were all text-based. There are many early multi-user programs that predate the Internet. Many multi-user programs are still text-based, but they are beginning to get graphical front ends (another form of encapsulation!). Other programs have grown out of single-user versions. Game manufacturers, for instance, have begun to support Internet connections. This allows game users to play against each other over the Internet.

Java adds something that these off-the-shelf games don't really have. You can download a Java game and play it on any Java-enabled platform immediately. You can even create a game server that manages the connections between players. Whenever you add new games to the server, the players download new Java applets that present the user interface for the new games.

The multi-user paradigm isn't restricted to games, of course. You can set up various kinds of collaborative applications, so people can solve problems and complete tasks from separate parts of the world.

Designing Multi-User Applications

A multi-user application is a slight variation on the typical client/server application. The only difference is that information passes from one client through the server to other clients. In a typical client/server application, information flows only from the client to the server and back. Figure 37.1 illustrates this difference.

Figure 37.1: Information flows between users in a multi-user application.
When you design a multi-user application, you should try to ignore the network if possible. You can't fully discount the network, of course. You have to remember that there is a high amount of overhead between the client and the server. You want to minimize the number of interactions between the client and server.

When you create your application, you first create the server, and a client interface if needed. Next, you create encapsulations for the various network protocols and remote object systems you want to support. Figure 37.2 shows an example configuration, where the server can be accessed through TCP sockets and RMI.

Figure 37.2: Through encapsulation, your application can support multiple protocols.

Listing 37.1 shows a server for a simple chat system. The server relays chat messages to the other users, and notifies the users whenever a new client enters the system or an existing client leaves.

Listing 37.1 Source Code for ChatServer.java

package chat.server;

import java.util.Vector;
import java.util.Hashtable;
import java.util.Enumeration;

// This is a simple chat application. It allows clients to enroll
// under a particular name, and send messages to each other.
// Messages are sent to the client via the ChatClient interface.
public class ChatServer {

    // clients is a table that maps a client name to a ChatClient
    // interface
    protected Hashtable clients;

    public ChatServer() {
        clients = new Hashtable();
    }

    // addClient adds a new client to the system and tells the other
    // clients about the new client.
    public synchronized void addClient(String name, ChatClient client) {
        // If the client picks a name that is already here,
        // disconnect the new client, let the old one keep its name.
        if (clients.get(name) != null) {
            client.disconnect();
            return;
        }

        // Add the new client to the table
        clients.put(name, client);

        // Tell the other clients about this new client
        sendEnterMessage(name);
    }

    public synchronized void removeClient(String name) {
        ChatClient client = (ChatClient) clients.get(name);
        if (client != null) {
            clients.remove(name);
            sendLeaveMessage(name);
        }
    }

    // removeClient removes a client from the chat system and tells
    // the other clients about it.
    public synchronized void removeClient(ChatClient client) {
        // We remove by ChatClient, not by name. We have to enumerate
        // through all the clients to find out the name of this client.
        Enumeration e = clients.keys();

        while (e.hasMoreElements()) {
            String key = (String) e.nextElement();

            // If we found the right name for this client, remove them
            // and tell everyone about it.
            if (clients.get(key) == client) {
                clients.remove(key);
                sendLeaveMessage(key);
            }
        }
    }

    // sendChat is called by a client to send a message to the
    // other clients
    public synchronized void sendChat(String name, String message) {
        Enumeration e = clients.elements();

        // Enumerate through all the clients and send them the chat
        // message. Note that this will send a message back to the
        // original sender, too.
        while (e.hasMoreElements()) {
            ChatClient client = (ChatClient) e.nextElement();
            client.incomingChat(name, message);
        }
    }

    // sendEnterMessage tells all the clients when a new client
    // has arrived
    public synchronized void sendEnterMessage(String name) {
        Enumeration e = clients.elements();

        // Enumerate through all the clients and tell them about
        // the new client
        while (e.hasMoreElements()) {
            ChatClient client = (ChatClient) e.nextElement();
            client.userHasEntered(name);
        }
    }

    // sendLeaveMessage tells all the clients that a client has left
    public synchronized void sendLeaveMessage(String name) {
        Enumeration e = clients.elements();

        // Enumerate through all the clients and tell them who left
        while (e.hasMoreElements()) {
            ChatClient client = (ChatClient) e.nextElement();
            client.userHasLeft(name);
        }
    }

    // getUserList returns a list of all the users on the system
    public synchronized String[] getUserList() {
        Enumeration e = clients.keys();

        // Create an array to hold the user names
        String[] nameList = new String[clients.size()];

        // Copy the user names into the nameList array
        int i = 0;
        while (e.hasMoreElements()) {
            nameList[i++] = (String) e.nextElement();
        }

        // Return the name list
        return nameList;
    }
}

Since this server needs to invoke methods on the client, it defines a ChatClient interface that all clients to this system must implement. Listing 37.2 shows this ChatClient interface.

Listing 37.2 Source Code for ChatClient.java

package chat.server;

public interface ChatClient {
    public void incomingChat(String who, String chat);
    public void userHasEntered(String who);
    public void userHasLeft(String who);
    public void disconnect();
}

Again, it is important to note that there is no mention of a specific networking protocol. These two classes represent the core application. If you design all your applications this way, you will have no trouble adding other ways to access your application.
Adding Socket-Based Access to Multi-User Applications

Once you have created an application, you can put a socket-based front end on it, allowing clients to access it over the network. Sockets are a low-level means of communication, and are simple to set up. Sockets are good for sending streams of bytes over the network. For sending messages, however, you have to do a bit more work.

It is very easy to create a socket-based server. First, you create a ServerSocket object that listens for incoming connections. Next, you use the accept method to wait for incoming connections. The accept method returns an instance of a Socket class, which represents the connection to the new client. After that, you can use getInputStream and getOutputStream to get streams for reading from and writing to the new client.

Creating a Socket-Based Server

The socket-based server is a separate class from the original application class. The socket server is just a setup-man. It accepts new socket connections and then creates objects that interact with the real application and pass the results back to the socket-based client. The socket server itself never interacts with the application server. Figure 37.3 illustrates a socket-based client connecting to the socket server.

Figure 37.3: A socket-based client connects to the socket server.

Next, the server creates a socket-based client object, which implements the application's client interface. The socket server also tells the new client object where to find the application object. Figure 37.4 illustrates this step.

Figure 37.4: The socket server creates a socket-based client object to handle the connection.

Finally, the socket-based client object interacts with the application, passing information over the socket connection to the user on the other end. Figure 37.5 shows this interaction.

Figure 37.5: The socket-based client communicates directly with the application.
Listing 37.3 shows a very basic TCP socket server that creates client objects to do the real dirty work. Listing 37.3 Source Code for TCPChatServer.java package chat.tcp.server; import java.net.*; import java.io.*; import chat.server.*; // This class implements a simple TCP server that listens // for incoming connections. It creates a TCPChatClient object // to handle the actual connection. public class TCPChatServer extends Object implements Runnable { // serverSocket is the socket we are listening on protected ServerSocket serverSocket; // server is a reference to the application object, which we // pass to the TCPChatClients protected ChatServer server; protected Thread myThread; public TCPChatServer(ChatServer server, int port) throws IOException { serverSocket = new ServerSocket(port); this.server = server; } public void run() { while (true) { try { // Accept a new connection Socket newConn = serverSocket.accept(); // Create a client to handle the connection TCPChatClient newClient = new TCPChatClient( server, newConn); // Start the client (it's runnable) newClient.start(); } catch (Exception e) { } } } public void start() { myThread = new Thread(this); myThread.start(); } public void stop() { myThread.stop(); myThread = null; } } Sending Messages over Sockets The client handler is where the real work is done. Sending messages over a TCP socket is a tricky matter. There are no message boundaries in TCP; it's just a stream of bytes. This means that if you write 50 bytes to the stream, the program on the other end might read it as two groups of 25 bytes, or 50 single bytes. There are two ways to approach this problem. One way is to have the client and server know what kind of data is being sent at any time and have them read that correct number of bytes. Typically, you would send a message type followed by the message bytes. The program reading the information would determine the length and content of the data based on the message type. 
The other solution is to put a message length in front of any message you send over the socket. For example, if you want to send 223 bytes, you write out 223 as a 4-byte integer value, followed by the 223 bytes of data. The program on the other end reads the 4-byte length and then sees that the length is 223 bytes. There are advantages and disadvantages to each approach. When you send messages as arrays of bytes, you have to take the extra step of putting the data into the array, rather than writing it directly to the socket. If you determine the length of the data based on context, you have to ensure that both ends of the connection are using the exact same format. In other words, if one side sends a message containing an integer and a string, the other side had better be expecting an integer and a string. If you send a message type that the other side doesn't understand, it can never recover. It has no idea how many bytes there are in the message data. The TCPChatClient shown in Listing 37.4 combines both of these methods. It sends a 4-byte message type, followed by the data length, and then the data. It uses DataInputStream and DataOutputStream filters on top of the socket connections so it can write different data types easily. The TCPChatClient class implements the ChatClient interface that the ChatServer class uses to send a message to a particular client. For each different protocol you support in a chat server, you will have a different class that implements the ChatClient interface. Listing 37.4 Source Code for TCPChatClient.java package chat.tcp.server; import java.io.*; import java.net.*; import chat.server.*; import chat.tcp.common.TCPChatMessageTypes; // This class acts like a client of the ChatServer application. It // translates messages from a TCP socket into requests for the chat // server, and translates method invocations from the server into // TCP messages. 
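Because DataOutputStream and DataInputStream work over any stream, the type + length + data layout described here can be exercised entirely in memory, without a socket. This sketch uses our own names, not the book's classes, to frame a message and parse it back:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch of the 4-byte type / 4-byte length / data framing, run over
// in-memory streams instead of a socket.
public class Framing {
    static final int CHAT = 1;

    // Frame a message: type, payload length, then the payload bytes.
    public static byte[] frame(int type, byte[] payload) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(type);
        out.writeInt(payload.length);
        out.write(payload);
        return buf.toByteArray();
    }

    // Parse one frame. The length field says exactly how many bytes to read,
    // so message boundaries survive TCP's byte-stream semantics.
    public static String parse(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        int type = in.readInt();   // a real server would dispatch on this
        int length = in.readInt();
        byte[] data = new byte[length];
        in.readFully(data);
        return new String(data, "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = frame(CHAT, "hello".getBytes("UTF-8"));
        System.out.println(parse(wire));  // hello
    }
}
```

The length field is what restores message boundaries on top of TCP's raw byte stream: the reader knows exactly how many bytes belong to the current message before the next one begins.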
public class TCPChatClient extends Object
    implements ChatClient, Runnable {

    // server is the ChatServer application we are a client of
    protected ChatServer server;

    // clientSock is the socket connection to the user
    protected Socket clientSock;

    // inStream and outStream are Data streams for the socket. This allows
    // us to send information in forms other than an array of bytes
    protected DataInputStream inStream;
    protected DataOutputStream outStream;

    // clientName is the name the user wants to be known by
    protected String clientName;

    protected Thread myThread;

    public TCPChatClient(ChatServer server, Socket clientSock)
        throws IOException {
        this.server = server;
        this.clientSock = clientSock;

        // get data streams to the socket
        inStream = new DataInputStream(
            clientSock.getInputStream());
        outStream = new DataOutputStream(
            clientSock.getOutputStream());

        // The first thing that the user sends us is
        // the name they want to use
        clientName = inStream.readUTF();

        // Add ourselves to the server application
        server.addClient(clientName, this);
    }

    // The next few methods implement a really simple messaging protocol:
    //   4-byte integer message type
    //   4-byte message length
    //   <message length> bytes of data

    // userHasEntered is called by the server whenever there's a new user.
    // The data part of the message is just the name of the user who has
    // entered.
    public void userHasEntered(String who) {
        try {
            // Write the message type
            outStream.writeInt(TCPChatMessageTypes.ENTER);

            // Write the message length
            outStream.writeInt(who.length());

            // Write the user's name
            outStream.writeBytes(who);
        } catch (Exception e) {
            server.removeClient(this);
        }
    }

    // userHasLeft is called by the server whenever a user leaves.
    // The data part of the message is just the name of the user who has
    // left.
public void userHasLeft(String who) { try { outStream.writeInt(TCPChatMessageTypes.LEAVE); outStream.writeInt(who.length()); outStream.writeBytes(who); } catch (Exception e) { server.removeClient(this); } } // incomingChat is called by the server whenever someone sends a message. // The data part of the message has three parts: // the length of the name of the person sending the message (the // length value itself is a 4-byte integer) // the name of the person sending the message // the chat message public void incomingChat(String who, String chat) { try { outStream.writeInt(TCPChatMessageTypes.CHAT); outStream.writeInt(who.length() + chat.length() + 4); outStream.writeInt(who.length()); outStream.writeBytes(who); outStream.writeBytes(chat); } catch (Exception e) { server.removeClient(this); } } // disconnect is called by the server when the client has // been disconnected from the server. We just close down the // socket and stop this thread. public void disconnect() { try { clientSock.close(); } catch (Exception e) { } stop(); } The rest of the TCPChatClient class deals with messages coming in from the client. The run method reads in an integer message type as the first part of the message. It then calls an appropriate method to handle the rest of the message. The handleChatMessage method reads an incoming chat message and passes it on to the server to be distributed to the rest of the clients. Because this protocol is extremely simple, there are no other message types defined. Because you may want to add protocol types at some point, the server should be able to receive messages it does not understand without completely dying. In this case, because the length of the message is always sent after the message type, the skipMessage method can read in and then ignore any message that the server doesn't understand. You should always provide some sort of safety mechanism like this. Someone may take this server and really expand it and then write a nice client for it. 
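That safety mechanism is easy to demonstrate in isolation. In this sketch (our own names, and in-memory streams rather than a socket), the reader hits a frame whose type it doesn't recognize, skips exactly length bytes, and parses the following frame cleanly:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Demonstrates the skip-unknown-message idea: because every frame carries
// its own length, a reader can hop over message types it doesn't recognize.
public class SkipDemo {

    // Reads frames until one with type 1 (the only type we understand)
    // arrives, skipping anything else.
    public static String readKnown(DataInputStream in) throws IOException {
        while (true) {
            int type = in.readInt();
            int length = in.readInt();
            if (type == 1) {
                byte[] data = new byte[length];
                in.readFully(data);
                return new String(data, "UTF-8");
            }
            in.skipBytes(length);  // unknown type: ignore it and keep reading
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(99); out.writeInt(3); out.write(new byte[]{1, 2, 3});     // unknown frame
        out.writeInt(1);  out.writeInt(2); out.write("ok".getBytes("UTF-8"));  // known frame

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        System.out.println(readKnown(in));  // ok
    }
}
```

A reader that never skips would lose its position in the stream the first time an unfamiliar type arrived, which is exactly the failure mode the length field prevents.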
If that client then accesses an original version of the server, it should still be able to safely use the original version without the server dying. If you decide to change the contents of a particular message, you should assign that message a new message type and continue to support the old type. If you added a date field to the incoming chat message, you can't expect all the clients to suddenly support the new field. You should be able to handle incoming chat messages with or without the date field. One of the best ways to handle this is by adding a second message type. Version numbers are another common device used for handling multiple formats for a particular message. When the client connects to the server, it tells the server which version of the messaging protocol it uses. If it uses version 2, for instance, it will be sending a date field in every chat message, while version 1 clients don't send the date field (see Listing 37.5). Listing 37.5 Source Code for TCPChatClient.java (continued) // handleChatMessage reads an incoming chat message from the user and // sends it to the server. The data part of the message is just the // chat message itself. public void handleChatMessage() throws IOException { // Get the message length int length = inStream.readInt(); byte[] chatChars = new byte[length]; // Read the chat message inStream.readFully(chatChars); String message = new String(chatChars, 0); // Send the chat message to the server server.sendChat(clientName, message); } // If we get a message we don't understand, skip over it. That's // why we have the message length as part of the protocol. 
public void skipMessage() throws IOException { int length = inStream.readInt(); inStream.skipBytes(length); } public void run() { while (true) { try { // Read the type of the next message int messageType = inStream.readInt(); switch (messageType) { // If it's a chat message, read it case TCPChatMessageTypes.CHAT: handleChatMessage(); break; // For any messages whose type we don't understand, skip the message default: skipMessage(); return; } } catch (Exception e) { server.removeClient(clientName); return; } } } public void start() { myThread = new Thread(this); myThread.start(); } public void stop() { myThread.stop(); myThread = null; } } The TCPChatClient class uses message types defined in an interface called TCPChatMessageTypes, which is shown in Listing 37.6. Listing 37.6 Source Code for TCPChatMessageTypes.java package chat.tcp.common; public interface TCPChatMessageTypes { public static final int CHAT = 1; public static final int ENTER = 2; public static final int LEAVE = 3; } The user-side client program is pretty simple to write. It needs to connect to the TCPChatClient and pass chat messages to it. It must also read any messages sent by the server. Since the user-side client is reading from two different places, it needs at least two threads. The RunTCPClient class, shown in Listing 37.7, uses a second class called TCPChatReader to read messages coming from the TCPChatServer. The TCPChatReader class calls methods in RunTCPClient to actually display the results of a message from the server. In this simple example, the RunTCPClient class just prints the messages to System.out. If you were making a chat applet, however, you would display incoming messages differently. You could still use the TCPChatReader with a chat applet. Listing 37.7 Source Code for RunTCPClient.java import java.net.*; import java.io.*; import chat.server.*; import chat.tcp.common.TCPChatMessageTypes; import chat.tcp.client.*; // Class is a client for the TCPChatServer object. 
// It reads chat messages from System.in and relays them to the chat server.
// It displays any information coming back from the chat server.
public class RunTCPClient extends Object implements ChatClient {

    public RunTCPClient() {
    }

    // Display a message when there's a new user
    public void userHasEntered(String who) {
        System.out.println("--- "+who+" has just entered ---");
    }

    // Display a message when someone exits
    public void userHasLeft(String who) {
        System.out.println("--- "+who+" has just left ---");
    }

    // Display a chat message
    public void incomingChat(String who, String chat) {
        System.out.println("<"+who+"> "+chat);
    }

    public void disconnect() {
        System.out.println("Chat server connection closed.");
        System.exit(0);
    }

    public static void main(String args[]) {
        int port = 4321;

        // Allow the port to be set from the command line (-Dport=4567)
        String portStr = System.getProperty("port");
        if (portStr != null) {
            try {
                port = Integer.parseInt(portStr);
            } catch (Exception ignore) {
            }
        }

        // Allow the server's host name to be specified on the command
        // line (-Dhost=myhost.com)
        String hostName = System.getProperty("host");
        if (hostName == null) hostName = "localhost";

        try {
            // Connect to the TCPChatServer program
            Socket clientSocket = new Socket(hostName, port);
            DataOutputStream chatOutputStream = new DataOutputStream(
                clientSocket.getOutputStream());
            DataInputStream chatInputStream = new DataInputStream(
                clientSocket.getInputStream());
            DataInputStream userInputStream =
                new DataInputStream(System.in);

            System.out.println("Connected to chat server!");

            // Prompt the user for a name
            System.out.print("What name do you want to use? 
"); System.out.flush(); String myName = userInputStream.readLine(); // Send the name to the server chatOutputStream.writeUTF(myName); RunTCPClient thisClient = new RunTCPClient(); // Start up a reader thread that reads messages from the server TCPChatReader reader = new TCPChatReader( thisClient, chatInputStream); reader.start(); // Read input from System.in while (true) { String chatLine = userInputStream.readLine(); sendChat(chatOutputStream, chatLine); } } catch (Exception e) { System.out.println("Got exception:"); e.printStackTrace(); System.exit(1); } } // sendChat sends a chat message to the TCPChatServer program public static void sendChat(DataOutputStream outStream, String line) throws IOException { outStream.writeInt(TCPChatMessageTypes.CHAT); outStream.writeInt(line.length()); outStream.writeBytes(line); } } The TCPChatReader class reads messages from the chat server. Rather than display the messages itself, it invokes methods in another object. This enables you to customize the display of information without changing the TCPChatReader class. Listing 37.8 shows the TCPChatReader class. Listing 37.8 Source Code for TCPChatReader.java package chat.tcp.client; import java.io.*; import chat.server.*; import chat.tcp.common.TCPChatMessageTypes; // This class sets up a thread that reads messages from the // TCPChatServer and then invokes methods in an object // implementing the ChatClient interface. public class TCPChatReader extends Object implements Runnable { protected ChatClient client; protected DataInputStream inStream; protected Thread myThread; public TCPChatReader(ChatClient client, DataInputStream inStream) { this.client = client; this.inStream = inStream; } public void run() { while (true) { try { int messageType = inStream.readInt(); // Look at the message type and call the appropriate method to // read the message. 
switch (messageType) { case TCPChatMessageTypes.CHAT: readChat(); break; case TCPChatMessageTypes.ENTER: readEnter(); break; case TCPChatMessageTypes.LEAVE: readLeave(); break; default: skipMessage(); break; } } catch (Exception e) { client.disconnect(); } } } public void start() { myThread = new Thread(this); myThread.start(); } public void stop() { myThread.stop(); myThread = null; } // readChat has the toughest job in reading the message, and it's not // really that tough. The message length is the total length of the // bytes sent. It is followed by the length of the name of the person // sending the chat, and then the name itself. This method has to // compute the length of the chat string by subtracting the length of // the name, and 4 bytes for the name length. public void readChat() throws IOException { // Get the total message length int length = inStream.readInt(); // Get the length of the name of the person sending the chat int whoLength = inStream.readInt(); // Compute the length of the chat, subtract the length of the name, // and 4 bytes for the length that was sent. 
int chatLength = length - whoLength - 4; // Read in the name of the person sending the chat byte[] whoBytes = new byte[whoLength]; inStream.readFully(whoBytes); String whoString = new String(whoBytes, 0); // Read in the chat byte[] chatBytes = new byte[chatLength]; inStream.readFully(chatBytes); String chatString = new String(chatBytes, 0); // Pass the chat to the object that will display it client.incomingChat(whoString, chatString); } public void readEnter() throws IOException { int length = inStream.readInt(); byte[] whoBytes = new byte[length]; inStream.readFully(whoBytes); String whoString = new String(whoBytes, 0); client.userHasEntered(whoString); } public void readLeave() throws IOException { int length = inStream.readInt(); byte[] whoBytes = new byte[length]; inStream.readFully(whoBytes); String whoString = new String(whoBytes, 0); client.userHasLeft(whoString); } public void skipMessage() throws IOException { int length = inStream.readInt(); inStream.skipBytes(length); } } Other Issues When Dealing with Sockets When you write socket-based servers, you have to take care of all the problems that RMI and CORBA normally take care of. For instance, if a client has a very slow network link, you may have threads that start blocking when trying to write to the client. This can cause the server to appear hung for some users. Just as you created a thread to read from a client, you can create a thread to write to a client. You can then create a pipe stream for sending data to the write thread. The write thread would read data from the pipe and write it to the client's socket connection. You also have the problem of deciding when a user's connection is hung. Usually when a client disappears, the socket connection closes. Sometimes, however, the network never receives a message to close down the connection. You may be queuing up data for a client that will never read it. 
One way to solve this problem is to keep track of how long a write thread has been trying to write data to a client. The write thread sets a flag indicating that it is trying to write and stores the current time before calling the write method. You then create a thread that runs in the background checking all the write threads. If it finds a thread that has been trying to write for a certain time period (maybe 10-15 minutes), it closes down the connection to that client.

Adding RMI Access to Multi-User Applications

You don't have to implement too many complex client/server applications using sockets before you wish for something better. It is a huge hassle to send messages over a socket manually. You must either write some libraries to help you or, better yet, use a system that takes care of messaging for you. RMI and CORBA fit this bill perfectly.

When you create server encapsulations with RMI and CORBA, you have to set things up a little differently. The TCP server created client objects that actually handled the connection. In this design, the TCP server acts like a factory; it produces the objects that handle the connections. The factory model for a TCP server is somewhat automatic, because the ServerSocket class behaves like a factory of Socket objects.

When you establish a connection using RMI or CORBA, you connect directly to an object. No new object is created on the server side, which makes it a little more difficult to create one object per client. You can solve this pretty easily by creating an object whose job is to create connection-handling objects. A client enrolls with this factory object, as illustrated in Figure 37.6.

Figure 37.6 : The client enrolls with the factory object.

The factory object then creates a new connection-handling object and passes the enrolled client a reference to the new connection handler. The client and the connection handler now communicate directly; the factory object is no longer involved.
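Stripped of the RMI plumbing that follows, the enroll-factory pattern reduces to a few lines of plain Java. The names here are invented for illustration (this is not the book's RMIChatEnrolImpl): the factory's only job is to hand each enrolling client its own handler object.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the enroll-factory pattern: the factory creates one
// handler per client and then steps out of the conversation.
public class EnrollDemo {

    interface Handler {
        String send(String chat);
    }

    // Per-client handler; in the chat design this would talk to the
    // application object on the client's behalf.
    static class ChatHandler implements Handler {
        private final String name;
        ChatHandler(String name) { this.name = name; }
        public String send(String chat) { return "<" + name + "> " + chat; }
    }

    static class Factory {
        private final List<Handler> handlers = new ArrayList<>();

        // enroll() is the factory's only job: make a handler, remember it,
        // and hand the client a direct reference to it.
        Handler enroll(String name) {
            Handler h = new ChatHandler(name);
            handlers.add(h);
            return h;
        }

        int activeHandlers() { return handlers.size(); }
    }

    public static void main(String[] args) {
        Factory factory = new Factory();
        Handler alice = factory.enroll("alice");
        factory.enroll("bob");
        System.out.println(alice.send("hi"));          // <alice> hi
        System.out.println(factory.activeHandlers());  // 2
    }
}
```

Once enroll() returns, the client talks only to its handler; the factory is out of the loop.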
Figure 37.7 illustrates this relationship.

Figure 37.7 : The factory creates a connection handler object that communicates with the client.

Listing 37.9 shows an RMI interface definition for a simple factory object.

Listing 37.9 Source Code for RMIChatEnrol.java

package chat.rmi;

public interface RMIChatEnrol extends java.rmi.Remote {
    public RMIChatServer enrol(String name, RMIChatClient client)
        throws java.rmi.RemoteException;
}

Listing 37.10 shows the RMI implementation for this factory. It simply identifies itself to the RMI registry (the RMI naming service) and then creates new RMIChatServerImpl objects in response to an enroll request from a client.

Listing 37.10 Source Code for RMIChatEnrolImpl.java

package chat.rmi;

import java.rmi.server.UnicastRemoteServer;
import java.rmi.server.StubSecurityManager;
import chat.server.*;

// This class is a factory for RMIChatServerImpl objects. Whenever
// a client enrolls, it creates a new RMIChatServerImpl and returns
// it to the client.
public class RMIChatEnrolImpl extends UnicastRemoteServer
    implements RMIChatEnrol {

    ChatServer server;

    public RMIChatEnrolImpl(ChatServer server) throws Exception {
        this.server = server;

        // Find out what name this object should use in the RMI registry
        String name = System.getProperty("rmiName", "chat");

        // Identify this object to the registry under that name
        java.rmi.Naming.rebind(name, this);
    }

    public RMIChatServer enrol(String name, RMIChatClient client)
        throws java.rmi.RemoteException {
        // Create a new RMIChatServerImpl and return it to the client
        return new RMIChatServerImpl(server, name, client);
    }
}

Once the connection handler is created, it needs to be able to communicate with the client, and the client needs to communicate back. Under RMI, this requires two more interfaces. Listing 37.11 shows the RMIChatClient interface, which is implemented by the client. The connection handler calls methods in RMIChatClient in response to method calls from the chat application.
Listing 37.11 Source Code for RMIChatClient.java package chat.rmi; public interface RMIChatClient extends java.rmi.Remote { public void incomingChat(String who, String chat) throws java.rmi.RemoteException; public void userHasEntered(String who) throws java.rmi.RemoteException; public void userHasLeft(String who) throws java.rmi.RemoteException; public void disconnect() throws java.rmi.RemoteException; } The RMIChatServer interface is implemented by the connection handler. The client invokes the sendChat method in this interface to send a chat message to the chat application. Listing 37.12 shows the RMIChatServer interface. Listing 37.12 Source Code for RMIChatServer.java package chat.rmi; public interface RMIChatServer extends java.rmi.Remote { public void sendChat(String chat) throws java.rmi.RemoteException; public void disconnect() throws java.rmi.RemoteException; } Unlike the complex TCPChatClient class, the RMIChatServerImpl class is extremely straightforward. It doesn't have to cram messages down a socket, and it doesn't have to interpret any data. All it does is invoke methods on the remote client or on the chat application. Listing 37.13 shows the RMIChatServerImpl class. Listing 37.13 Source Code for RMIChatServerImpl.java package chat.rmi; import java.rmi.server.UnicastRemoteServer; import java.rmi.server.StubSecurityManager; import chat.server.*; // This class is actually an RMI encapsulation for the // ChatClient interface. It implements the methods in the // ChatClient interface and invokes the corresponding method // in the RMIChatClient interface. // It also handles messages coming from the client. When the // sendChat method is invoked via RMI, it turns around and // invokes sendChat in the chat application. 
public class RMIChatServerImpl extends UnicastRemoteServer implements RMIChatServer, ChatClient { protected ChatServer server; protected String name; protected RMIChatClient client; public RMIChatServerImpl(ChatServer server, String name, RMIChatClient client) throws java.rmi.RemoteException { this.server = server; this.name = name; this.client = client; server.addClient(name, this); } public void incomingChat(String who, String chat) { try { client.incomingChat(who, chat); } catch (Exception e) { try { client.disconnect(); } catch (Exception ignore) { } server.removeClient(name); client = null; } } public void userHasEntered(String who) { try { client.userHasEntered(who); } catch (Exception e) { try { client.disconnect(); } catch (Exception ignore) { } server.removeClient(name); client = null; } } public void userHasLeft(String who) { try { client.userHasLeft(who); } catch (Exception e) { try { client.disconnect(); } catch (Exception ignore) { } server.removeClient(name); client = null; } } public void disconnect() { try { client.disconnect(); } catch (Exception ignore) { } server.removeClient(name); client = null; } public void sendChat(String chat) throws java.rmi.RemoteException { server.sendChat(name, chat); } } The actual client program that you run is very simple, too. Unlike the TCP program, it doesn't need to spawn a separate thread, since RMI is running as a separate thread. The program can concentrate on reading input from the user. Listing 37.14 shows the RMIChatClientImpl object, which is the actual application that a user would run. 
Listing 37.14 Source Code for RMIChatClientImpl.java import java.net.*; import java.io.*; import java.rmi.server.UnicastRemoteServer; import java.rmi.server.StubSecurityManager; import chat.server.*; import chat.rmi.*; // This class is an RMI client for the chat application public class RMIChatClientImpl extends UnicastRemoteServer implements RMIChatClient { public RMIChatClientImpl() throws java.rmi.RemoteException { } // The following 4 methods are callbacks from the // RMIChatServerImpl class. public void userHasEntered(String who) throws java.rmi.RemoteException { System.out.println("--- "+who+" has just entered ---"); } public void userHasLeft(String who) throws java.rmi.RemoteException { System.out.println("--- "+who+" has just left ---"); } public void incomingChat(String who, String chat) throws java.rmi.RemoteException { System.out.println("<"+who+"> "+chat); } public void disconnect() throws java.rmi.RemoteException { System.out.println("Chat server connection closed."); System.exit(0); } public static void main(String args[]) { // Get the name of the enroll factory String chatName = System.getProperty("rmiName", "chat"); // Must have a stub security manager! System.setSecurityManager(new StubSecurityManager()); try { // Get the name the user wants to use System.out.print("What name do you want to use? 
"); System.out.flush(); DataInputStream userInputStream = new DataInputStream(System.in); String myName = userInputStream.readLine(); // Create an instance of this object to receive callbacks RMIChatClient thisClient = new RMIChatClientImpl(); // Locate the RMIChatEnrol object RMIChatEnrol enrol = (RMIChatEnrol) java.rmi.Naming.lookup(chatName); // Enrol to the chat system RMIChatServer server = enrol.enrol(myName, thisClient); // Free up the enrol object, we don't need it any more enrol = null; // Read lines from the user and pass them to the server while (true) { String chatLine = userInputStream.readLine(); server.sendChat(chatLine); } } catch (Exception e) { System.out.println("Got exception:"); e.printStackTrace(); System.exit(1); } } } All you need now is a class to start up the chat application and set up the TCP and RMI front ends for the application. Because the application implementation is separate from the networking protocols, you can run both TCP and RMI interfaces to a single chat application. This means that RMI users and TCP users can talk together. Listing 37.15 shows the RunServer class that starts up everything. Listing 37.15 Source Code for RunServer.java import chat.tcp.server.TCPChatServer; import chat.server.ChatServer; import chat.rmi.*; import java.rmi.server.StubSecurityManager; // This class starts up the chat application and the TCP and RMI // front ends. 
public class RunServer { public static void main(String[] args) { try { // Start the chat application ChatServer server = new ChatServer(); int port = 4321; String portStr = System.getProperty("port"); if (portStr != null) { try { port = Integer.parseInt(portStr); } catch (Exception ignore) { } } System.setSecurityManager(new StubSecurityManager()); // Start the RMI server RMIChatEnrol rmiEnrol = new RMIChatEnrolImpl( server); // Start the TCP server TCPChatServer tcpServer = new TCPChatServer( server, port); tcpServer.start(); } catch (Exception e) { System.out.println("Got exception starting up:"); e.printStackTrace(); } } } You should be able to use these classes as a starting point for any multi-user application you want to write. Always remember, however, to keep the application separated from the network protocols.
Now, queries can be executed using mysql_query() method to insert values image insert - Java Beginners image insert i browsed the image by clicking d browse button.if i press submit button means d image automatically stored in db.then i have 2.... ---------------------------- Read for more information, iBatis Stored Procedure Example ;/sqlMap> Now we can call this stored procedure as : sqlMap.queryForList... iBatis Stored Procedure Example  ... can use inline insert , delete, update SQL commands on our database table image stored in mysql image stored in mysql how does an image store in mysql using java. gime an exam. code. thanks.. Insert Image in MySQL Php Sql Query Insert Php Sql Query Insert This example illustrates how to execute insert query with values in php application. In this example we create two mysql query for insert statement with separate values in the database table. The table before Resize image stored in database Resize image stored in database hi, Can any one please send code for how to resize image stored in database in blob datatype in jsp, i want to resize it to 140(w)*160(h) please help me insert code jsp to access insert code jsp to access insert code jsp to access Insert Image In DB through Servlet - JSP-Servlet Insert Image In DB through Servlet Dear Sir, My previous Query... pre = conn.prepareStatement("insert into MyPictures values...(e.getMessage()); } } } Whether Any entry will be made in web.xml when Insert Image Jsp Query Jsp Query Can we display images in jsp file? If Can, How and please mention code Query in jsp Query in jsp I have one textbox in jsp. when i will write any letter in that textbox i want all values which are starting from that letter in option from database. same as we are getting in google search textbox. 
Thank query regarding multiple select query regarding multiple select i have a select in jsp with multiple options.When i select multiple values i am not able to insert it into database and how do i retrieve them..please help me with an example code Bulk insert and dynamic schema loading Bulk insert and dynamic schema loading Hi All, I am new to H2... assistance regarding H2 database. Overview of my work : I get the query from reporting engine. Query is split and only the select with where is run how to call jsp from flex how to call jsp from flex hi, i want to know that how can i call a FLEX from JSP. . A good Example will be appreciated Insert Image in DB through Servlet - JSP-Servlet Insert Image in DB through Servlet Dear Sir, My previous Query... = "amar123"; public RetriveImage(){ try{ query=new StringBuffer("insert into Image... pre = conn.prepareStatement("insert into MyPictures values JSP:Passing query as parameter - Java Beginners JSP:Passing query as parameter Hi Friends, I have a real tough... problem. Please help me solve this problem. My Problem is: I have 3 query..."; String qry3="select age from table13"; string q=Insert into emp values update date from stored procedure update date from stored procedure how to works on insert,update data from stored procedure. Insert data using stored procedure how to insert data in database using html+jsp how to insert data in database using html+jsp anyone know what... = 0; // sql query to retrieve values from the specified table. String QueryString = "insert into xxqc_announcement(PID,msg how to call jsp from flex how to call jsp from flex Plz help me out to get JSP from FLEX.... I need your help.... Thanks for future in registration page,hw the details wil b stored in db,n retrieved..... in registration page,hw the details wil b stored in db,n retrieved..... ... will b stored in db....(i knw hw to do it bt i ve some doubts in it),so plz give me... in db(do v need to write select query in d same pgm)? 
and here is my code tthe Select query in JSP Select query in JSP We are going to describe select query in JSP..., address, destination, salary, dateOfJoin).After that you will insert dummy data. After that we create JSP page than we have make database connection. After How to write a select box and id should be stored in database? How to write a select box and id should be stored in database? Hi...) should be stored in database using SWINGS concept plz help You haven't clarified your query properly. Anyways, here is an example of inserting How to pass query as parameter? - JSP-Servlet How to pass query as parameter? Hi Friends, I have a real tough... of my problem. Please help me solve this problem. My Problem is: I have 3 query..."; String qry3="select age from table13"; string q=Insert into emp values Create After Insert Trigger in SQL )); Create Trigger Stu_Insert The Query create a trigger stu_insert... Create After Insert Trigger in SQL Create a Trigger is a special kind of stored procedure image upload and stored in database - JSP-Servlet image upload and stored in database How can i upload a image and store that image in a database How can i search and insert within a query How can i search and insert within a query if it is possible... then insert these data (batch, semester, degree, seat numbers and marks of mid, lab... that when seat numbers more then one problem generates how insert multiple values insert image - JSP-Servlet insert image hi friends i am mahesh i am trying to insert image into oracle database using JSP but i am not geting so please friends send me the code for inserting image into database using JSP Hi Friend, Try How to call java method on Button click in jsp? How to call java method on Button click in jsp? How to call java method on Button click in jsp Create After Insert Trigger in SQL )); Create Trigger Stu_Insert The Query create a trigger... the query inside the begin-end statement. The insert into add the records... 
Create After Insert Trigger in SQL   connection and SQL Query - JDBC each time. Now I'm trying to execute a query to insert those values into an oracle... or udate Query it is not accepting as the format for them is executeQuery(String). My goal is to insert into Names Table values stored in temp[0] and temp[1 Insert file data into database connection. Then using the insert query, we have inserted whole file data...Insert file data into database In this section, you will learn how to insert... and BufferedReader class to read the file. The data of the file is then stored can i insert values into two tables by a query - JDBC can i insert values into two tables by a query hi all, can i insert values into two tables by a query Thanks Bala k https call in AJAX - Ajax https call in AJAX Hi, I am trying to call a https URL in my AJAX script which is in my JSP. But it is not getting the reponse from the url i..., Please visit the following links:dbc insert jdbc insert Hi , i want to insert a declared integer variable into a mysql table through jdbc. how to insert that. help me with query... thanks... a table there. After creating a table in database, it insert a rows in the database Inserting Data In Database table using Statement to insert the data in the database from our java program in the table stored... ServletException. Inside this method call the getWriter() method of the PrintWriter class. We can insert the data in the database only and only Insert a row in 'Mysql' table using JSP Code Insert a row in 'Mysql' table using JSP Code In this section, we will discuss about how to insert data in Mysql database using JSP code. Query...; Mysql database Table. Code to insert row in Mysql table : databaseinsertion.jsp Query to insert values in the empty fields in the last row of a table in Mysql database? Query to insert values in the empty fields in the last row of a table in Mysql database? I have some fields filled and some fields empty in the last... row. 
So what will be the query call dll from java - JSP-Servlet call dll from java Hello all how to call dll file methods in java To Upload and insert the file into Database with Current Date and Time In JSP ; In this tutorial, you will learn how to upload a file through JSP and insert... a Query "INSERT INTO file_tbl set file_data='"+value+"',file_date='"... To Upload and insert the file into Database with Current Date Mysql Insert Mysql Insert Mysql Insert is used to insert the records or rows to the table. Understand with Example The Tutorial illustrate an example from 'Mysql Insert'.To grasp call from java - JavaMail call from java Hi, how i will call servlet from java.. plz its... the java.net package to call a servlet from the java code...(); } } ------------------------------------------- Servlet and JSP Examples Insert Blob(Image) in Mysql table using JSP Insert Blob(Image) in Mysql table using JSP In this Section, we will insert blob data(image) in Mysql database table using JSP code. A Blob stores a binary... pointer which points to the Blob data, data is not directly stored in the row how to insert, retrieve data from,to db(code).... how to insert, retrieve data from,to db(code).... Hi..... i ve... that data will b stored in db(m using sybase). 2.also hw to retrieve the data from... i ve to write(i mean, in login jsp pgm itself r in new jsp pgm To insert attachment file in database in JSP. To insert attachment file in database in JSP. I am doing project in JSP. How to insert attachment file in mysql database? Please suggest some solution. Your inputs is valuable to me. Hi Friend, Visit Here Thanks
http://www.roseindia.net/tutorialhelp/comment/82079
CC-MAIN-2014-52
refinedweb
2,661
62.58
Before you start

About this tutorial

Python is a popular object-oriented scripting language with a simple syntax and a large developer base. It is a general-purpose language and can be used in a variety of settings. It's also popular as a beginner's programming language, much like the BASIC language of the 1970s.

This tutorial demonstrates the Python language with emphasis on network programming. I define the basic sockets features of Python in addition to some of Python's other classes that provide asynchronous sockets. I also detail Python's application-layer protocol classes, showing how to build Web clients, mail servers and clients, and more. I also demonstrate a simple chat server to illustrate the power of Python for sockets applications.

You should have a basic understanding of the standard BSD Sockets API and some experience with the GNU/Linux® environment. Some familiarity with object-oriented concepts is also beneficial.

Prerequisites

This tutorial and the examples demonstrated in it rely on version 2.4 of Python. You can download this version from the Python Web site (see Resources for a link). To build the Python interpreter, you need the GNU C compiler (gcc) and the configure/make utilities (which are part of any standard GNU/Linux distribution).

Introducing Python

First, I'll provide a taste of Python.

What is Python

Python is a general-purpose object-oriented scripting language that can be applied to a large variety of problems. It was created in the early 1990s at CWI in Amsterdam and continues to evolve today under the Python Software Foundation. Python is amazingly portable and can be found in almost all operating systems. Python is interpreted and is easy to extend.
You can extend Python by adding new modules that include functions, variables, or types through compiled C or C++ functions. You can also easily embed Python within C or C++ programs, allowing you to extend an application with scripting capabilities. One of the most useful aspects of Python is its massive number of extension modules. These modules provide standard functions such as string or list processing, but there are also application-layer modules for video and image processing, audio processing, and yes, networking.

A taste of Python

I'll give you a sense of what Python is all about. As an interpreted language, it's easy to use Python to try out ideas and quickly prototype software. Python programs can be interpreted as a whole or line by line. You can test the following snippets of Python code by first running Python, then typing each line one at a time. After Python is invoked, a prompt (>>>) appears to allow you to type commands. Note that indentation is important in Python, so the preceding spaces in a line must not be ignored:

Listing 1. Some Python samples to try out

#Open a file, read each line, and print it out
for line in open('file.txt'):
    print line

#Create a file and write to it
file = open("test.txt", "w")
file.write("test line\n")
file.close()

#Create a small dictionary of names and ages and manipulate it
family = {'Megan': 13, 'Elise': 8, 'Marc': 6}

#Results in 8
family['Elise']

#Remove the key/value pair
del family['Elise']

#Create a list and a function that doubles its input. Map the
# function to each of the elements of the list (creating a new
# list as a result).
arr = [1, 2, 3, 4, 5]
def double(x): return x*2
map(double, arr)

#Create a class, inherit by another, and then instantiate it and
# invoke its methods.
class Simple:
    def __init__(self, name):
        self.name = name
    def hello(self):
        print self.name+" says hi."

class Simple2(Simple):
    def goodbye(self):
        print self.name+" says goodbye."

me = Simple2("Tim")
me.hello()
me.goodbye()

Why use Python

The number one reason to learn and use Python is its popularity. The size of its user base and the growing number of applications built with Python make it a worthwhile investment. You find Python in several development areas: it's used to build system utilities, as a glue language for program integration, for Internet applications, and for rapid prototyping.

Now, let's dig into sockets programming in Python.

Table 1. Python classes and modules

Let's look at each of these modules to understand what they can do for you.

The socket module

Table 2. Class methods for the socket module

Table 3. Instance methods for the socket module

The difference between a class method and an instance method is that instance methods require a socket instance to be performed (returned from socket), where class methods do not.

The SocketServer module

import SocketServer

class hwRequestHandler( SocketServer.StreamRequestHandler ):
    def handle( self ):
        self.wfile.write("Hello World!\n")

server = SocketServer.TCPServer( ("", 2525), hwRequestHandler )
server.serve_forever()

That's it! Python permits a number of variations on this theme, including UDPServers and forking and threading servers.

Sockets programming in Python

In languages with sockets, the socket is universally the same: it's a conduit between two applications that can communicate with one another.

Preliminaries

Whether you're writing a sockets application in Python, Perl, Ruby, Scheme, or any other useful language (and by useful I mean languages that have a sockets interface), the socket is universally the same. It's a conduit between two applications that can communicate with one another (either locally on a single machine or between two machines in separate locations). The difference with sockets programming in a language like Python is in the helper classes and methods that can simplify sockets programming.
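The tutorial mentions SocketServer's forking and threading variants without showing one, so here is a sketch of the same "Hello World" service using ThreadingTCPServer, which dispatches each client connection to its own thread. The try/except import is my addition so the sketch also runs on modern Python, where the module was renamed socketserver; binding to port 0 (ask the OS for any free port) and the loopback address are assumptions for a self-contained demo, not part of the original listing.

```python
import threading

# SocketServer was renamed socketserver in Python 3; this fallback
# lets the sketch run under either version.
try:
    import socketserver as SocketServer
except ImportError:
    import SocketServer

import socket

class hwRequestHandler(SocketServer.StreamRequestHandler):
    def handle(self):
        # wfile expects bytes on Python 3
        self.wfile.write(b"Hello World!\n")

# Port 0 asks the OS for any free port; a real server would fix one
server = SocketServer.ThreadingTCPServer(("127.0.0.1", 0), hwRequestHandler)
t = threading.Thread(target=server.serve_forever)
t.daemon = True
t.start()

# Exercise the server with a quick client
host, port = server.server_address
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((host, port))
reply = cli.recv(100)
print(reply)
cli.close()
server.shutdown()
server.server_close()
```

Each incoming connection gets a fresh thread running handle(), so slow clients no longer block one another, which is the main reason to prefer the threading variant over plain TCPServer.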
In this section I'll demonstrate the Python socket API. You can execute the Python interpreter with a script or, if you execute Python by itself, you can interact with it one line at a time. In this way, you can see the result of each method invoked. The following example illustrates interacting with the Python interpreter. Here, I use the socket class method gethostbyname to resolve a fully qualified domain name () to a string quad-dotted IP address ('129.42.19.99'):

Listing 3. Using the socket API from the interpreter command line

[camus]$ python
Python 2.4 (#1, Feb 20 2005, 11:25:45)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> socket.gethostbyname('')
'129.42.19.99'
>>>

After the socket module is imported, I invoke the gethostbyname class method to resolve the domain name to an IP address. Now, I'll discuss the basic socket methods and communicating through sockets. Feel free to follow along with your Python interpreter.

Creating and destroying sockets

To create a new socket, you use the socket method of the socket class. This is a class method because you don't yet have a socket object from which to apply the methods. The socket method is similar to the BSD API, as demonstrated in the creation of a stream (TCP) and datagram (UDP) socket:

Listing 4. Creating stream and datagram sockets

streamSock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
dgramSock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )

In each case, a socket object is returned. The AF_INET symbol (argument one) indicates that you're requesting an Internet Protocol (IP) socket, specifically IPv4. The second argument is the transport protocol type (SOCK_STREAM for TCP sockets and SOCK_DGRAM for UDP sockets). If your underlying operating system supports IPv6, you can also specify socket.AF_INET6 to create an IPv6 socket.
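A portable script can check for IPv6 support before attempting to create an AF_INET6 socket: the socket module exposes a has_ipv6 flag (present since Python 2.3) that reports whether the interpreter was built with IPv6 support. This short sketch is my illustration, not part of the tutorial's listings; the inner try/except covers the case where the build supports IPv6 but the running kernel does not.

```python
import socket

# has_ipv6 is True when this Python build supports IPv6 sockets
if socket.has_ipv6:
    try:
        sock6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        sock6.close()
        print("IPv6 stream socket created")
    except socket.error:
        print("Build supports IPv6, but the kernel does not")
else:
    print("This Python was built without IPv6 support")
```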
To close a connected socket, you use the close method:

streamSock.close()

Finally, you can delete a socket with the del statement:

del streamSock

This statement permanently removes the socket object. Attempting to reference the socket thereafter produces an error.

Socket addresses

An endpoint address for a socket is a tuple consisting of an interface address and a port number. Because Python can represent tuples easily, the address and port are represented as such. This illustrates an endpoint for interface address 192.168.1.1 and port 80:

( '192.168.1.1', 80 )

You can also use a fully qualified domain name here, such as:

( '', 25 )

This example is simple and certainly beats the sockaddr_in manipulation that's necessary in C. The following discussion provides examples of addresses in Python.

Server sockets

Server sockets are typically those that expose a service on a network. Because server and client sockets are created in different ways, I discuss them independently. After you create the socket, you use the bind method to bind an address to it, the listen method to place it in the listening state, and finally the accept method to accept a new client connection. This is demonstrated below:

Listing 5. Using server sockets

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock.bind( ('', 2525) )
sock.listen( 5 )
newsock, (remhost, remport) = sock.accept()

For this server, the address ('', 2525) is used, which means that the wildcard is used for the interface address (''), allowing incoming connections from any interface on the host. You also bind to port number 2525. Note here that the accept method returns not only the new socket object that represents the client connection (newsock) but also an address tuple (the remote address and port number of the peer end of the socket). Python's SocketServer module can simplify this process even further, as demonstrated above.
You can also create datagram servers, but they are connectionless and therefore have no associated accept method. The following example creates a datagram server socket:

Listing 6. Creating a datagram server socket

sock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
sock.bind( ('', 2525) )

The upcoming discussion of sockets I/O shows how I/O works for both stream and datagram sockets. Now, let's explore how a client creates a socket and connects it to a server.

Client sockets

The mechanisms for creating and connecting client sockets are similar to the setup of server sockets. Upon creating a socket, an address is needed, not to locally bind the socket (as is the case with a server) but rather to identify where the socket should attach. Say there's a server on a host with an interface IP address of '192.168.1.1' and port 2525. The following code creates a new socket and connects it to the defined server:

Listing 7. Creating a stream socket and connecting to the server

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock.connect( ('192.168.1.1', 2525) )

For datagram sockets, the process is a bit different. Recall that datagram sockets are by nature disconnected. One way to think about it is as follows: whereas stream sockets are pipes between two endpoints, datagram sockets are message-based, able to communicate with multiple peers at the same time. Here's an example of a datagram client.

Listing 8. Creating a datagram socket and connecting to the server

sock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
sock.connect( ('192.168.1.1', 2525) )

What's different here is that even though I've used the connect method, there's no real connection between the client and server. The connect here is a simplification for later I/O. Typically in datagram sockets, you must provide the destination information with the data that you want to send.
By using connect, I've cached this information with the client, and send methods can occur much like stream socket versions (no destination address is necessary). You can call connect again to re-specify the target of the datagram client's messages.

Stream sockets I/O

Sending or receiving data through stream sockets is simple in Python. Several methods exist to move data through a stream socket (such as send, recv, read, and write). This first example demonstrates a server and client for stream sockets. In this demonstration, the server echoes whatever it receives from the client.

The echo stream server is presented in Listing 9. Upon creating a new stream socket, an address is bound to it (accept connections from any interface and port 23000) and then the listen method is invoked to enable incoming connections. The echo server then goes into a loop for client connections. The accept method is called and blocks (that is, does not return) until a new client connects, at which point the new client socket is returned along with address information for the remote client. With this new client socket, I call recv to get a string from the peer, then write this string back out to the socket. I then immediately close the socket.

Listing 9. Simple Python stream echo server

import socket

srvsock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
srvsock.bind( ('', 23000) )
srvsock.listen( 5 )

while 1:
    clisock, (remhost, remport) = srvsock.accept()
    str = clisock.recv(100)
    clisock.send( str )
    clisock.close()

Listing 10 shows the echo client that corresponds with the server in Listing 9. Upon creating a new stream socket, the connect method is used to attach this socket to the server. When connected (when the connect method returns), the client emits a simple text message with the send method, then awaits the echo with the recv method. The close method is performed to close the socket.

Listing 10. Simple Python stream echo client

import socket

clisock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
clisock.connect( ('', 23000) )
clisock.send("Hello World\n")
print clisock.recv(100)
clisock.close()

Datagram sockets I/O

Datagram sockets are disconnected by nature, which means that communication requires that a destination address be provided. Similarly, when a message is received through a socket, the source of the data must be returned. The recvfrom and sendto methods support the additional address information, as you can see in the datagram echo server and client implementations.

Listing 11 shows the datagram echo server. A socket is first created and then bound to an address using the bind method. An infinite loop is then entered for serving client requests. The recvfrom method receives a message from the datagram socket and returns not only the message but also the address of the source of the message. This information is then turned around with the sendto method to return the message to the source.

Listing 11. Simple Python datagram echo server

import socket

dgramSock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
dgramSock.bind( ('', 23000) )

while 1:
    msg, (addr, port) = dgramSock.recvfrom( 100 )
    dgramSock.sendto( msg, (addr, port) )

The datagram client is even simpler. After creating a datagram socket, I use the sendto method to send a message to a specific address. (Remember: Datagrams have no connection.) After sendto finishes, I await the echo response with recv, then print it. Note that I don't use recvfrom here because I'm not interested in the peer address information.

Listing 12. Simple Python datagram echo client

import socket

dgramSock = socket.socket( socket.AF_INET, socket.SOCK_DGRAM )
dgramSock.sendto( "Hello World\n", ('', 23000) )
print dgramSock.recv( 100 )
dgramSock.close()

Socket options

Sockets default to a set of standard behaviors, but it's possible to alter the behavior of a socket using options.
You manipulate socket options with the setsockopt method and capture them with the getsockopt method. Using socket options is simple in Python, as demonstrated in Listing 13. In the first example, I read the size of the socket send buffer. In the second example, I get the value of the SO_REUSEADDR option (reuse the address within the TIME_WAIT period) and then enable it.

Listing 13. Using socket options

sock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )

#Get the size of the socket's send buffer
bufsize = sock.getsockopt( socket.SOL_SOCKET, socket.SO_SNDBUF )

#Get the state of the SO_REUSEADDR option
state = sock.getsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR )

#Enable the SO_REUSEADDR option
sock.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )

The SO_REUSEADDR option is most often used in socket server development. You can increase the socket send and receive buffers for greater performance, but given that you're operating here in an interpreted scripting language, it may not provide you with much benefit.

Asynchronous I/O

Python offers asynchronous I/O as part of the select module. This feature is similar to the C select mechanism but has some simplifications. I'll first introduce select and then show you how to use it in Python.

The select method allows you to multiplex events for several sockets and for several different events. For example, you can instruct select to notify you when a socket has data available, when it's possible to write data through a socket, and when an error occurs on a socket; and you can perform these actions for many sockets at the same time. Where C works with bitmaps, Python uses lists to represent the descriptors to monitor and also the return descriptors whose constraints are satisfied. Consider the following example in which you await some input from standard input:

Listing 14. Awaiting input from stdin

import select
import sys

rlist, wlist, elist = select.select( [sys.stdin], [], [] )
print sys.stdin.read()

The arguments passed to select are lists representing read events, write events, and error events. The select method returns three lists containing the objects whose events were satisfied (read, write, exception). In this example, upon return rlist should be [sys.stdin], indicating that data is available to read on stdin. The data is then read with the read method.

The select method also works on socket descriptors. In the following example (see Listing 15), two client sockets are created and connected to a remote peer. The select method is then used to identify which socket has data available for reading. The data is then read and emitted to stdout.

Listing 15. Demonstrating the select method with multiple sockets

import socket
import select

sock1 = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
sock2 = socket.socket( socket.AF_INET, socket.SOCK_STREAM )

sock1.connect( ('192.168.1.1', 25) )
sock2.connect( ('192.168.1.1', 25) )

while 1:
    #Await a read event
    rlist, wlist, elist = select.select( [sock1, sock2], [], [], 5 )

    #Test for timeout
    if [rlist, wlist, elist] == [ [], [], [] ]:
        print "Five seconds elapsed.\n"
    else:
        #Loop through each socket in rlist, read and print the available data
        for sock in rlist:
            print sock.recv( 100 )

Building a Python chat server

A simple chat server

You've explored the basic networking APIs for Python; now you can put this knowledge to use in a simple application. In this section, you'll build a simple chat server. Using Telnet, clients can connect to your Python chat server and globally communicate with one another. Messages submitted to the chat server are viewed by others (in addition to management information, such as clients joining or leaving the chat server). This model is shown graphically in Figure 1.

Figure 1.
The chat server uses the select method to support an arbitrary number of clients

An important requirement to place on your chat server is that it must be scalable. The server must be able to support an arbitrary number of stream (TCP) clients. To support an arbitrary number of clients, you use the select method to asynchronously manage your client list. But you also use a feature of select for your server socket. The read event of select determines when a client has data available for reading, but it can also be used to determine when a server socket has a new client trying to connect. You exploit this behavior to simplify the development of the server.

Next, I'll explore the source of the Python chat server and identify the ways in which Python helps simplify its implementation.

The ChatServer class

Let's start by looking at the Python chat server class and the __init__ method — the constructor that's invoked when a new instance is created.

The class is made up of four methods. The run method is invoked to start the server and permit client connections. The broadcast_string and accept_new_connection methods are used internally in the class and will be discussed shortly. The __init__ method is a special method that's invoked when a new instance of the class is created. Note that all methods take the self argument, a reference to the class instance itself (much like the this parameter in C++). You'll see the self parameter, part of all instance methods, used here to access instance variables.

The __init__ method creates three instance variables. The port is the port number for the server (passed in the constructor). The srvsock is the socket object for this instance, and descriptors is a list that contains each socket object for the class. You use this list within the select method to identify the read event list. Finally, Listing 16 shows the code for the __init__ method.
After creating a stream socket, the SO_REUSEADDR socket option is enabled so that the server can be quickly restarted, if necessary. The wildcard address is bound with the defined port number. Then the listen method is invoked to permit incoming connections. The server socket is added to the descriptors list (the only element at present), but all client sockets will be added as they arrive (see accept_new_connection). A salutation is provided to stdout indicating that the server has started.

Listing 16. The ChatServer class with the __init__ method

import socket
import select

class ChatServer:

  def __init__( self, port ):
    self.port = port;
    self.srvsock = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
    self.srvsock.setsockopt( socket.SOL_SOCKET, socket.SO_REUSEADDR, 1 )
    self.srvsock.bind( ("", port) )
    self.srvsock.listen( 5 )
    self.descriptors = [self.srvsock]
    print 'ChatServer started on port %s' % port

  def run( self ):
    ...

  def broadcast_string( self, str, omit_sock ):
    ...

  def accept_new_connection( self ):
    ...

The run method

The run method is the server loop for your chat server (see Listing 17). When called, it enters an infinite loop, providing communication between connected clients. The core of the server is the select method. I pass the descriptor list (which contains all the server's sockets) as the read event list to select (and empty lists for write and exception). When a read event is detected, it's returned as sread. (I ignore the swrite and sexc lists.)

The sread list contains the socket objects that will be serviced. I iterate through the sread list, checking each socket object found. The first check in the iterator loop is whether the socket object is the server. If it is, a new client is trying to connect and the accept_new_connection method is called. Otherwise, the client socket is read. If recv returns an empty string, the peer socket has closed.
In this case, I construct a message and send it to all connected clients, close the peer socket, and remove the corresponding object from the descriptor list. If recv instead returns a non-empty string, a message is available and stored in str. This message is distributed to all other clients using broadcast_string.

Listing 17. The chat server run method is the core of the chat server

  def run( self ):

    while 1:

      #Await an event on a readable socket descriptor
      (sread, swrite, sexc) = select.select( self.descriptors, [], [] )

      #Iterate through the tagged read descriptors
      for sock in sread:

        #Received a connect to the server (listening) socket
        if sock == self.srvsock:
          self.accept_new_connection()
        else:

          #Received something on a client socket
          str = sock.recv(100)

          #Check to see if the peer socket closed
          if str == '':
            host,port = sock.getpeername()
            str = 'Client left %s:%s\r\n' % (host, port)
            self.broadcast_string( str, sock )
            sock.close()
            self.descriptors.remove(sock)
          else:
            host,port = sock.getpeername()
            newstr = '[%s:%s] %s' % (host, port, str)
            self.broadcast_string( newstr, sock )

Helper methods

The two helper methods in the chat server class provide methods for accepting new client connections and broadcasting messages to the connected clients.

The accept_new_connection method (see Listing 18) is called when a new client is detected on the incoming connection queue. The accept method is used to accept the connection, which returns the new socket object and remote address information. I immediately add the new socket to the descriptors list, then send a salutation to the new client welcoming the client to the chat.

I create a string identifying that the client has connected and broadcast this information to the group using broadcast_string (see Listing 19). Note that in addition to the string being broadcast, a socket object is also passed. The reason is that I want to selectively omit some sockets from getting certain messages.
For example, when a client sends a message to the group, the message goes to the group but not back to itself. When I generate the status message identifying a new client joining the group, it goes to the group but not the new client. This task is performed in broadcast_string with the omit_sock argument. This method walks through the descriptors list and sends the string to all sockets that are not the server socket and not omit_sock.

Listing 18. Accepting a new client connection on the chat server

  def accept_new_connection( self ):

    newsock, (remhost, remport) = self.srvsock.accept()
    self.descriptors.append( newsock )

    newsock.send("You're connected to the Python chatserver\r\n")
    str = 'Client joined %s:%s\r\n' % (remhost, remport)
    self.broadcast_string( str, newsock )

Listing 19. Broadcasting a message to the chat group

  def broadcast_string( self, str, omit_sock ):
    for sock in self.descriptors:
      if sock != self.srvsock and sock != omit_sock:
        sock.send(str)
    print str,

Instantiating a new ChatServer

Now that you've seen the Python chat server (under 50 lines of code), let's see how to instantiate a new chat server object in Python. Start the server by creating a new ChatServer object (passing the port number to be used), then calling the run method to start the server and allow incoming connections:

Listing 20. Instantiating a new chat server

myServer = ChatServer( 2626 )
myServer.run()

At this point, the server is running and you can connect to it from one or more clients. You can also chain the calls together to simplify this process (as if it needs to be simpler):

Listing 21. Chaining methods

myServer = ChatServer( 2626 ).run()

which achieves the same effect (note, though, that run loops forever, so the assignment to myServer never actually completes). Next, I'll show the ChatServer class in operation.

Demonstrating the ChatServer

Here's the ChatServer in action. I show the output of the ChatServer (see Listing 22) and the dialog between two clients (see Listing 23 and Listing 24). The user-entered text appears in bold.

Listing 22.
Output from the ChatServer

[plato]$ python pchatsrvr.py
ChatServer started on port 2626
Client joined 127.0.0.1:37993
Client joined 127.0.0.1:37994
[127.0.0.1:37994] Hello, is anyone there?
[127.0.0.1:37993] Yes, I'm here.
[127.0.0.1:37993] Client left 127.0.0.1:37993

Listing 23. Output from Chat Client #1

[plato]$ telnet localhost 2626
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
You're connected to the Python chatserver
Client joined 127.0.0.1:37994
[127.0.0.1:37994] Hello, is anyone there?
Yes, I'm here.^]
telnet> close
Connection closed.
[plato]$

Listing 24. Output from Chat Client #2

[plato]$ telnet localhost 2626
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
You're connected to the Python chatserver
Hello, is anyone there?
[127.0.0.1:37993] Yes, I'm here.
[127.0.0.1:37993] Client left 127.0.0.1:37993

As you see in Listing 22, all dialog between all clients is emitted to stdout, including client connect and disconnect messages.

High-level networking classes

Networking modules

Python includes several specialized modules for application-layer protocols (built on the standard socket module). The available modules are wide and varied, and they provide module implementations of the Hypertext Transfer Protocol (HTTP), the Simple Mail Transfer Protocol (SMTP), the Internet Message Access Protocol (IMAP) and Post Office Protocol (POP3), the Network News Transfer Protocol (NNTP), XML-RPC (remote procedure call), FTP, and many others. This section demonstrates the modules shown in Table 4.

Table 4. Useful application-layer protocol modules

The httplib (HTTP client)

The HTTP client interface can be useful when developing Web robots or other Internet scraping agents. The Web protocol is request/response in nature over stream sockets. Python makes it easy to build Web robots through a simple Web interface. Listing 25 demonstrates the httplib module.
You create a new HTTP client instance with HTTPConnection, providing the Web site to which you want to connect. With this new object (httpconn), you can request files with the request method. Within request, you specify the HTTP GET method (which requests a file from the server, compared to HEAD, which simply retrieves information about the file). The getresponse method parses the HTTP response header to determine whether an error was returned. If the file was successfully retrieved, the read method on the new response object returns the text, which is then printed.

Listing 25. Building a simple (non-rendering) HTTP client with httplib

import httplib

httpconn = httplib.HTTPConnection("")
httpconn.request("GET", "/developerworks/index.html")
resp = httpconn.getresponse()
if resp.reason == "OK":
  resp_data = resp.read()
  print resp_data
httpconn.close()

The smtplib (SMTP client)

SMTP allows you to send e-mail messages to a mail server, which can be useful in networking systems to relay status about the operation of a device. The Python module for sending e-mail messages is simple and consists of creating an SMTP object, sending an e-mail message using the sendmail method, then closing the connection with the quit method. The example in Listing 26 demonstrates sending a simple e-mail message. The msg string contains the body of the message (which should include the subject line).

Listing 26. Sending a short e-mail message with smtplib

import smtplib

fromAdrs = 'mtj@mtjones.com'
toAdrs = 'you@mail.com'
msg = 'From: me@mail.com\r\nTo: you@mail.com\r\nSubject:Hello\r\nHi!\r\n'

mailClient = smtplib.SMTP('192.168.1.1')
mailClient.sendmail( fromAdrs, toAdrs, msg )
mailClient.quit()

The poplib (POP3 client)

POP3 is another useful application-layer protocol for which a module exists within Python. The POP3 protocol allows you to connect to a mail server and download new mail, which can be useful for remote commanding — embedding commands within the body of an e-mail message.
After executing the embedded command, you can use smtplib to return a response e-mail message to the source.

The demonstration in Listing 27 shows a simple application that connects to a mail server and emits the subject lines for all pending e-mail for the user. The poplib module is relatively simple but offers several methods for gathering and managing e-mail at the server. In this example, I create a new POP3 object with the POP3 method, specifying the mail server. The user and pass_ methods authenticate the application to the server; the stat method returns the number of messages waiting for the user and the total number of bytes taken up by all messages.

Next, I loop through each available message and use the retr method to grab the next e-mail message. This method returns a list of the form:

(response, ['line', ...], octets)

where response is the POP3 response for the particular message, the line list represents the individual lines of the e-mail message, and the final element, octets, is the number of bytes in the e-mail message. The inner loop simply iterates over the second element ([1]) of the list, which is the list of lines making up the e-mail message body. For each line, I test whether 'Subject:' is present; if so, I print this line. After all e-mail messages have been checked, a call to the quit method ends the POP3 session.

Instead of using the retr method, you could also use the top method to extract just the header of the e-mail message. This would be faster and minimize the amount of data transferred to the client.

Listing 27.
Retrieving e-mail messages from a POP3 mail server and emitting the subject line

import poplib
import re

popClient = poplib.POP3('192.168.1.1')

popClient.user('user')
popClient.pass_('password')

numMsgs, mboxSize = popClient.stat()

print "Number of messages ", numMsgs
print "Mailbox size", mboxSize
print

for id in range (numMsgs):
  for mail in popClient.retr(id+1)[1]:
    if re.search( 'Subject:', mail ):
      print mail
  print

popClient.quit()

Summary

This tutorial reviewed the basics of the sockets API and showed how to build networking applications in Python. The standard socket module was introduced as a way to build both client and server applications, as was the SocketServer module, which simplifies the construction of simple socket servers. I presented a simple chat server implemented in Python that offered support for a scalable number of clients using the select method. In closing, I previewed some of Python's high-level networking classes that simplify the development of applications requiring application-layer networking protocols. Python is an interesting and useful language that is well worth your time to learn.
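The top method mentioned in the text can be sketched as follows. This is a hedged illustration rather than the tutorial's own code: it targets Python 3 (where poplib returns bytes and print is a function), and the host, user, and password are placeholders.

```python
import poplib

def extract_subjects(header_lines):
    """Pick the Subject: lines out of a list of raw header bytes,
    as returned by poplib in Python 3."""
    return [line.decode('ascii', 'replace')
            for line in header_lines
            if line.startswith(b'Subject:')]

def list_subjects(host, user, password):
    """Return the subject of every waiting message, downloading
    headers only via top() instead of full bodies via retr()."""
    client = poplib.POP3(host)
    client.user(user)
    client.pass_(password)
    num_msgs, _mbox_size = client.stat()
    subjects = []
    for msg_id in range(num_msgs):
        # top(which, howmuch) -> (response, header_lines, octets);
        # howmuch=0 asks for the headers plus zero body lines.
        _resp, lines, _octets = client.top(msg_id + 1, 0)
        subjects.extend(extract_subjects(lines))
    client.quit()
    return subjects
```

Because only headers cross the wire, this scales much better than retr when mailboxes hold large messages.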
https://developer.ibm.com/tutorials/l-pysocks/
Yegor Bugayenko
12 February 2015

Code For the User, Not for Yourself

First, no matter what the methodology is, we all write software for our users (a.k.a. customers, project sponsors, end users, or clients). Second, no matter what the methodology is, we write incrementally, releasing features and bug fixes one by one. Maybe I'm saying something absolutely obvious here, but it's important to remember that each new version should first of all satisfy the needs of the user, not of us programmers. In other words, the way we decompose a big task into smaller pieces should be user-targeted, and that's why you always work top down. Let's see what I mean through a practical example.

Say I'm contracted by a friend of mine to create a word-counting command line tool very similar to wc. He promised to pay me $200 for this work, and I promised him I'd deliver the product in two increments—an alpha and a beta version. I promised him I'd release the alpha version on Saturday and the beta version on Sunday. He is going to pay me $100 after the first release and the rest after the second release. I'll write in C, and he will pay in cash.

The tool is very primitive, and it only took me a few minutes to write. Take a look at it:

#include <stdio.h>
#include <unistd.h>
int main() {
  char ch;
  int count = 0;
  while (1) {
    if (read(STDIN_FILENO, &ch, 1) <= 0) {
      break;
    }
    if (ch == ' ') {
      ++count;
    }
  }
  if (count > 0) {
    ++count;
  }
  printf("%d\n", count);
  return 0;
}

But let's be professional and not forget about build automation and unit testing. Here is a simple Makefile that does them both:

all: wc test
wc: wc.c
	gcc -o wc wc.c
test: wc
	echo '' | ./wc | grep '0'
	echo 'Hello, world! How are you?' | ./wc | grep '5'

Now I run make from a command line and get this output:

$ make
echo '' | ./wc | grep '0'
0
echo 'Hello, world! How are you?' | ./wc | grep '5'
5

All clean! I'm ready to get my $200.
Wait, the deal was to deliver two versions and get cash in two installments. Let's back up a little and think—how can we break this small tool into two parts?

On first thought, let's release the tool itself first and build automation and testing next. Is that a good idea? Can we deliver any software without running it first with a test? How can I be sure that it works if I don't ship tests together with it? What will my friend think about me releasing anything without tests? This would be a total embarrassment.

Okay, let's release the Makefile first and wc.c next. But what will my friend do with a couple of tests and no product in hand? This first release will be absolutely pointless, and I won't get my $100.

Now we're getting to the point of this article. What I'm trying to say is that every new increment must add some value to the product as it is perceived by the customer, not by us programmers. The Makefile is definitely a valuable artifact, but it provides no value to my friend. He doesn't need it, but I need it.

Here is what I'm going to do. I'll release a skeleton of the tool, backed by the tests but with an absolutely dummy implementation. Look at it:

#include <stdio.h>
int main() {
  printf("5\n");
  return 0;
}

And I will modify the Makefile accordingly, disabling the first test to make sure the build passes. Does my tool work? Yes, it does. Does it count words? Yes, it does, for some inputs. Does it have value to my friend? Obviously! He can run it from the command line, and he can pass a file as an input. He will always get the number "5" as a result of counting, though. That's a bummer, but it's an alpha version. He doesn't expect it to work perfectly. However, it works, it is backed by tests, and it is properly packaged.

What I just did is a top-down approach to design. First of all, I created something that provides value to my customer. I made sure it also satisfies my technical objectives, like proper unit test coverage and build automation.
But the most important goal for me was to make sure my friend received something ... and paid me.
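For illustration only (the article's tool is written in C), the same naive space-counting logic can be sketched in Python. It deliberately reproduces the C version's quirks, including reporting 0 for a single word that contains no spaces:

```python
def count_words(text):
    """Mirror the C tool's naive logic: count spaces, then add one
    more "word" only if at least one space was seen."""
    spaces = text.count(' ')
    return spaces + 1 if spaces > 0 else 0

# The same two cases the Makefile tests:
assert count_words('') == 0
assert count_words('Hello, world! How are you?') == 5
```

The asserts mirror the `grep`-based checks in the Makefile, which is the whole point of the article: ship the behavior the customer can observe, with the tests riding along.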
https://www.yegor256.com/2015/02/12/top-down-design.amp.html
On 06/25/2012 09:34 AM, Kevin Wolf wrote:
> Am 25.06.2012 16:51, schrieb Corey Bryant:
>> Thanks for catching this. I'll fix this in v5. In terms of platforms
>> that support dup3 vs dup2, I'm assuming the following preprocessor
>> checks will do what we need:
>>
>> #if defined(__linux__) || defined(__CYGWIN__)
>> dup3(fd, monfd->fd, O_CLOEXEC)
>> #else
>> dup2()+fcntl(F_GETFD/F_SETFD)
>> #endif
>
> Checking for dup3() in configure might be the better option. It's not
> like it has been there forever, even on Linux.

Agreed. Furthermore, since dup3() has been proposed for addition into POSIX[1], it won't be long before other platforms add it. Always favor feature checks (a configure probe for dup3) over platform checks (hard-coding the assumption that Linux and Cygwin are the only platforms that will ever provide it).

[1]

--
Eric Blake   eblake redhat com   +1-919-301-3266
Libvirt virtualization library
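As an aside (not part of the thread), CPython's os module wraps exactly this portability decision: os.dup2(fd, fd2, inheritable=False) duplicates the descriptor and, to my understanding, uses dup3() with O_CLOEXEC atomically where the platform provides it, falling back to dup2() followed by the fcntl dance otherwise. A minimal sketch:

```python
import os

# Duplicate a pipe's read end onto another descriptor number and mark
# the duplicate close-on-exec; the atomic-vs-fallback choice between
# dup3() and dup2()+fcntl() is made inside the interpreter.
r, w = os.pipe()
target = os.dup(r)            # reserve a descriptor number to overwrite
os.dup2(r, target, inheritable=False)

assert not os.get_inheritable(target)   # won't leak across exec

for fd in (r, w, target):
    os.close(fd)
```

This is the same "feature check over platform check" principle, just pushed down into the runtime instead of a configure script.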
https://www.redhat.com/archives/libvir-list/2012-June/msg01122.html
The Java Specialists' Newsletter
Issue 089, 2004-05-26
Category: Exceptions
Java version: Sun JDK 1.5.0-beta

Welcome to the 89th edition of The Java(tm) Specialists' Newsletter. We have probably the most elite Java newsletter in the world, so if you are a member, you are a part of that elite! If you know of people who are really good at Java, please let them know about this newsletter.

Today was an exceptionally beautiful day in Cape Town. No wind, nice warm weather, as if it were summer. My friend Herman Lintvelt (who has authored a few newsletters) and I had a great lunch dining on huge slabs of meat and fine South African red wine, to celebrate the "good life". I am sitting outside on my balcony at 23:00 enjoying a mild evening at 25 degrees celsius :-) It will probably rain tomorrow.

A small change in my newsletter structure is that from now on, the heading will show which version of Java I was working with when I wrote the newsletter. Since The Java(tm) Specialists' Newsletter explores interesting features, we sometimes stumble across "features" that are actually bugs and that are removed in the next release. This has caused confusion in the past, especially when readers look at older newsletters.

I would like to thank all those who sent me their quotes of what they thought about the newsletter. I was touched, and have new motivation and energy to write these newsletters :-)

Last week I presented a Java course at a South African company. During the course, one of the C++ programmers questioned me about generics in Java. I try to stay away from beta versions for production code, but curiosity got the better of me, so I tried playing with it. IntelliJ IDEA 4.0 was not too happy with the new for construct, so I tried IDEA 4.1, which worked fine. I must admit that generics take some getting used to, and changing your code is not always straightforward. For example, I could not find a way of using generics in a static context.
In a future newsletter, I will write about some of the experiences of migrating my existing code to generics. A Google on "generics java" gave me approximately 50'000 hits, so I won't bore you with "yet another how-to-do Java Generics newsletter" until I have something interesting to write about them :-)

Instead, like most topics in this newsletter series, I will write about something that I discovered by chance, whilst I was glancing at the source code of Sun JDK 1.5 beta. Google did not reveal any newsletters about this topic, so here goes...

In my experience, all Java projects contain poorly written exception handling code. Let's take a simple example, and make it complicated:

import java.sql.*;
import java.util.List;

/**
 * You'll have to compile with JDK 1.5 and use the switch
 * javac -source 1.5
 */
public class DatabaseQueryCode {
  private final Connection con;

  public DatabaseQueryCode(Connection con) {
    this.con = con;
  }

  /**
   * Take a list of Strings and execute all the queries in one
   * transaction.
   */
  public void executeQueries(List<String> queries)
      throws SQLException {
    con.setAutoCommit(false);
    Statement st = con.createStatement();
    for (String s : queries) { // I love this construct :-)
      st.execute(s);
    }
    con.commit();
    st.close();
    con.setAutoCommit(true);
  }
}

That code is obviously not as correct as it could have been. If we fail halfway through the method, we won't set the auto-commit back to true, so let's change that:

public void executeQueries(List<String> queries)
    throws SQLException {
  con.setAutoCommit(false);
  Statement st = con.createStatement();
  try {
    for (String s : queries) {
      st.execute(s);
    }
    con.commit();
    st.close();
  } finally {
    con.setAutoCommit(true);
  }
}

This is better, but also not ideal.
If any of the queries fail, we want to roll back and still close the statement, and we want to make sure that we do not close the statement if it was not open, so let's change it again.

public void executeQueries(List<String> queries)
    throws SQLException {
  con.setAutoCommit(false);
  Statement st = con.createStatement();
  try {
    for (String s : queries) {
      st.execute(s);
    }
    con.commit();
  } catch (SQLException ex) {
    con.rollback();
  } finally {
    st.close();
    con.setAutoCommit(true);
  }
}

Good, this is better, but what happens if one of the Strings is null and we get a NullPointerException? What happens if we run out of memory and get an OutOfMemoryError? What happens if we get an OOME and at the same time the connection does not work anymore? Then the finally would cause an exception, which would mask the OOME and make it disappear. There are lots of possibilities, and if we try to cater for all eventualities (excuse the pun) then we will go crazy trying, and our code will look rather complicated.

My point with this example was not to show you how to write the perfect database exception handling. Truth is, I don't know how to make it bullet proof. Maybe water balloon proof, but not bullet proof. So in the real world, how are exceptions handled? Frequently, exceptions are stubbed out and ignored, because the writer of the code did not know how to handle the error (and was going to go back and fix it, one day, but the project manager was breathing down his neck and the release had to go out that afternoon). This is bad, since you then do not know that something has gone awry. On the other hand, if the exception bubbles up the call stack, it may kill the thread, and you may never know that there was an exception.
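For contrast, the same layering of concerns (transaction control, rollback on failure, unconditional cleanup) can be sketched in Python with sqlite3. This is a hedged illustration, not the newsletter's code; note that unlike the Java version above, it re-raises after rolling back, so the failure stays visible instead of being silently swallowed:

```python
import sqlite3

def execute_queries(con, queries):
    """Run all queries in one transaction: commit only if every
    statement succeeds, roll back on any failure, and always
    close the cursor."""
    cur = con.cursor()
    try:
        for q in queries:
            cur.execute(q)
        con.commit()
    except Exception:
        con.rollback()
        raise          # keep the failure visible, don't swallow it
    finally:
        cur.close()

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (x INTEGER)')
con.commit()

execute_queries(con, ['INSERT INTO t VALUES (1)',
                      'INSERT INTO t VALUES (2)'])

# A failing batch is rolled back as a unit: the first INSERT below
# succeeds but is undone when the second statement fails.
try:
    execute_queries(con, ['INSERT INTO t VALUES (3)',
                          'INSERT INTO no_such_table VALUES (4)'])
except sqlite3.OperationalError:
    pass

rows = [r[0] for r in con.execute('SELECT x FROM t ORDER BY x')]
assert rows == [1, 2]
```

Even this sketch dodges the hard cases the newsletter raises (an error thrown from the cleanup itself masking the original one), which is exactly why a central last-resort handler is worth having.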
I have witnessed production code do things like this (I kid you not):

try {
  // do something
} catch (Exception ex) {
  // log to some obscure log file, maybe
  return "";
}

The effect was that the webpage showed empty strings as values when something went wrong with the code.

My approach to exceptions is to have a central mechanism that deals with any exceptions that I am not 100% sure of how to handle. Whenever something goes wrong, this central place is notified. However, what happens when you are using someone else's code and their threads die without warning?

An amusing example was an early version of Together/J. I enjoyed using Together/J, even though it was rather memory hungry. Instead of starting with 512MB as the default maximum old generation memory size, I set it to only use 92MB. This made Together work faster and save resources. However, occasionally random threads would simply die, so you could perhaps not print anymore, or some other functionality would vanish.

In newsletter 81, I described a way that you could catch unhandled exceptions in your GUI code, by starting up your GUI in a special thread group. I had assumed that this was the way that uncaught exceptions should be handled in future. The old way of catching these exceptions was to set a system property, but in the JDK code comments that was described as a temporary workaround.

If you look at the java.lang.Thread JavaDocs, you will notice some new methods that can help us, specifically setDefaultUncaughtExceptionHandler() and setUncaughtExceptionHandler(). With these two methods, you can specify an exception handler for an individual thread (setUncaughtExceptionHandler()) or you can set a default handler for all threads that do not have their own UncaughtExceptionHandler (setDefaultUncaughtExceptionHandler()).
To contrast this with the earlier newsletter, please use the Gui class of newsletter 81 and compile it together with these two classes, DefaultExceptionHandler and EvenBetterGui:

import javax.swing.*;
import java.awt.*;
// did you know that you could import inner classes?
import java.lang.Thread.*;

public class DefaultExceptionHandler implements UncaughtExceptionHandler {
  public void uncaughtException(Thread t, Throwable e) {
    // Here you should have a more robust, permanent record of problems
    JOptionPane.showMessageDialog(findActiveFrame(),
        e.toString(), "Exception Occurred", JOptionPane.OK_OPTION);
    e.printStackTrace();
  }

  private Frame findActiveFrame() {
    Frame[] frames = JFrame.getFrames();
    for (int i = 0; i < frames.length; i++) {
      if (frames[i].isVisible()) {
        return frames[i];
      }
    }
    return null;
  }
}

import javax.swing.*;

public class EvenBetterGui {
  public static void main(String[] args) {
    Thread.setDefaultUncaughtExceptionHandler(
        new DefaultExceptionHandler());
    Gui gui = new Gui();
    gui.pack();
    gui.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    gui.setVisible(true);
  }
}

We can now catch all unhandled exceptions by calling Thread.setDefaultUncaughtExceptionHandler() and passing in our own exception handler (subject to security manager permissions of course).

In my opinion, this is a great addition to the Java Programming Language, and I am looking forward to finding more nuggets that will convince me to switch over to JDK 1.5 permanently. A nice resource for finding differences between JDK 1.4.2 and JDK 1.5 is JDiff. [I discovered after sending this newsletter that the author of JDiff is on our newsletter :-]

Thread has some other rather useful methods, such as getStackTrace() and getAllStackTraces(). What else can you do in JDK 1.5? You can measure elapsed time in nanoseconds instead of milliseconds, which should make performance calculations more accurate (or more suspect?). Have a look at System.nanoTime().
That's all for this newsletter. I have to get to sleep before I catch yet another cold from overworking...

Kind regards

Heinz
http://www.javaspecialists.co.za/archive/Issue089.html
Dec 17, 2007 10:07 AM|onlinedev|LINK

Hi All,

We are planning to re-develop our star product from PowerBuilder to ASP.NET 2.0. While doing some R&D on the .NET technology, I have seen a lot of posts on the MVC architecture.

FYI: This is quite a big application and has taken 60 man-years. Using ASP.NET technology, it is estimated that the first release will take at least 20 man-years of work.

The basic architecture we have designed is as follows:

1. Presentation Layer - Aspx pages using ObjectDataSource, GridView, FormView and third-party controls like Infragistics (Customer.aspx)
2. BLL - Business Logic Layer (Customer.vb - all the business logic is written here)
3. DAL - Data Access Layer (DAL.vb - interacts with the database)

After reading a bit about the MVC architecture, we are quite attracted, since it is written in various places that MVC is one of the best architectures for enterprise-level applications. But I could not really find a good sample project which can guide us on how to implement this.

Maybe my point of view is wrong, since I am quite new to the MVC architecture, but I have seen a few examples and my feeling is that instead of making development simpler, it makes it more complex. Things which can be done in a simpler way involve a number of classes, functions and calls to the controller, view etc. before the data is finally displayed in the presentation layer.

I would appreciate it if someone can provide a sample project which can make us understand the advantages of implementing the MVC architecture using ASP.NET. I am looking for a simple project, where a simple page attached to a simple table, say 'CUSTOMER', does all the operations like Add, Delete, Edit, Save etc.
Thanks in advance,
onlinedev

Dec 17, 2007 10:24 AM|subdigital|LINK

First, this is off-topic, but for an application that is going to take 20 man-years to build, I sincerely hope that you plan a little bit more than Presentation -> BLL -> DAL. I'd also recommend frequent releases in an agile fashion to make sure the project is a success, but that is a conversation for another day.

About MVC, it is a perfect fit for a large system like you are describing. You have so much more control in testing, and you don't have to wrestle any viewstate or complex page lifecycle issues that you might see people posting here about.

There is a reference application currently under construction in MVCContrib (), so check that out and get a feel for it.

Dec 17, 2007 10:38 AM|abombss|LINK

+1 Ben, the real benefit with MVC is the separation you will be able to achieve between your views and your application. You can build a very functional and testable application, ensuring all business requirements are met outside of the presentation, where it is much more difficult to test.

As for sample apps, MVC is very new. The first CTP came out about a week ago, so most of what you will see are small demos and bits of functionality. Have a look at some of the MonoRail samples, as the architecture is very similar to MS MVC. Many things are different, but it should at least give you an idea of how the controllers, views, and models work.

Adam

Dec 17, 2007 10:42 AM|onlinedev|LINK

Thanks for your suggestions, Ben!

We have already decided to develop using an agile approach with small releases, to have more control over the development and the success of the project, since we have a number of customers who agree to test the small releases and give us feedback in parallel.

Now, about the sample you have provided: I could not see the working sample over there.
Another thing I want to highlight: I am looking for a simple project explaining the MVC architecture, not a complex one.

Regards,
onlinedev

Dec 17, 2007 10:46 AM|subdigital|LINK

Check out MonoRail, as there are more samples out there for that platform. The concepts are almost identical, so they transfer over easily. Have you checked out ScottGu's detailed posts on the topic? That's the best intro you could ever ask for to get started :)

Dec 17, 2007 11:53 AM|onlinedev|LINK

Hi Ben,

I have tried hard to find ScottGu's detailed posts with a sample project, but could not find an appropriate one. If possible, please send me a few links to his posts, or any other links which cover my requirements.

Regards,
onlinedev

Dec 17, 2007 01:52 PM|foobar123|LINK

Dec 17, 2007 03:52 PM|onlinedev|LINK

Hi Ben & all techies,

Thanks for providing the good links about the MVC architecture. I need expert advice on whether or not I should use the MVC architecture in my enterprise-level application development. As far as I can see, the time required for development using the MVC architecture would be at least double that of the traditional n-tier architecture.

What I saw in the sample application provided by ScottGu is that he has written a lot of code simply for displaying, adding and editing products, whereas my code for the simple add, edit, delete and save operations is quite small in comparison. Since my design already contains a Presentation Layer, BLL and DAL, and is itself a separation of layers, why should I use the MVC architecture? I am still unable to figure out the benefit of the MVC architecture over a traditional n-tier architecture with the presentation, BLL and DAL separated from each other.

Please tell me if my point of view on the MVC architecture is different, or simply wrong.
Regards,
Onlinedev

Dec 17, 2007 04:14 PM|abombss|LINK

I may be going out on a limb here, but I am guessing you have lots of code in the code-behind of your regular aspx pages? If you do, how are you testing that? How are you sure you do not have business logic leaking into the code-behind? Simple scenarios like coloring a table row if a value is negative are pretty complicated and can only be done in code-behind. In MVC, things like this are trivial. I don't want to write code for presentation; I want to write views and templates.

Lines of code, to me, is one of the worst metrics for assessing whether a technology or platform is the right one. Just because it is less code does not mean it is easier to maintain or extend. If you are happy with WebForms, stick with that. I haven't used WebForms by choice in over 18 months because I found them just too difficult to work with. I found myself always fighting the framework to make the things I wanted work. I also wanted real lightweight Ajax without dealing with postbacks and full page renders, etc.

Again, back to testability: I also struggled with testing my WebForms code, and over time I always started to leak things into my code-behind that should not be there. MVC is the right approach for me. Obviously lines of code are a factor, but testability, maintainability, scalability and loose coupling are more important in my book, and I have found MVC frameworks much better at achieving those goals than WebForms. With tools like ReSharper I worry less and less about lines of code, as I have templates, snippets, and total control over refactoring.

Just my .02

Dec 17, 2007 05:55 PM|jcteague|LINK

I concur with Adam's sentiments. My only addition would be that the biggest time drain in development is not typing, it's time in the debugger, especially when you have to debug a code-behind file: you have to fire up the ASP.NET runtime and click through to the page / method you want to test.
With a fully tested controller you will spend much less time in the debugger. When you do need to debug, you set your breakpoint in the test and it will load much faster, going right to the method in question.

John

Dec 17, 2007 06:08 PM|abombss|LINK

+1 John. I generally say "testability" without spelling out what the real benefits are. Less time debugging is by far the biggest time saver, and debugging only the broken test instead of firing up the whole web app and stepping through the page is certainly faster.

Dec 17, 2007 07:59 PM|shinakuma|LINK

The truth is that no one can give you a comprehensive 5-minute crash course on good enterprise architecture and design, and it's not something you can pick up just by reading through a small sample app. Like many have said, there's a lot more to it than Presentation -> BLL -> DAL and line counts. You need to study a few books on the subjects of architecture, design and unit testing. Also, given the project scope, it's pretty risky to assume you can take a team of PowerBuilder developers, who may or may not have stateless web experience, and be successful on a new platform. You need to bring in someone with expertise and experience beyond ObjectDataSource and GridView.

Dec 18, 2007 11:33 AM|onlinedev|LINK

Hi ShinaKuma,

Of course we need an expert and experienced team for such a big application development. I hope I am talking to experienced people here; it's a part of the whole show, dear. This is just a start, to get the right direction and put our efforts toward it. Anyway, please share what you know or answer my query, instead of stopping / closing this post, so that in the future others can also take advantage of it.

Regards,
onlinedev

Dec 18, 2007 02:55 PM|onlinedev|LINK

I am still working to get a hold on the MVC architecture, and found one good article on MSDN ...
Title: Implementing Model-View-Controller in ASP.NET

After reading this article, I understand that if you keep your actual business logic and data access functions separate from the view, then you have actually achieved the MVC architecture. In this article, the MVC-based page is divided into the following parts (in ASP.NET):

1. solution.aspx (contains all the controls used to view the data, e.g. ListView, Button etc.)
2. solution.vb (a code-behind file; responds to user actions and interacts with the BLL)
3. BLL.vb (a BLL file, which has all the business logic and makes calls to the DAL to access the data)
4. DAL.vb (a DAL file; fetches data from the database (XML, SQL Server or Oracle) and sends it back to the BLL)

If we look here, the separation is pretty good and reusability can be easily achieved.

My question: what I see here is that solution.aspx and solution.vb are still tightly coupled, and all the front-end validations are written in solution.vb (the code-behind file). I am still unable to get a feel for true MVC architecture with the approach described by Microsoft, because there is a lot of dependency between these two files (.aspx and .vb). Please share your knowledge to help me understand how to achieve true MVC architecture, or let me know if this is a correct approach to achieving the MVC architecture using ASP.NET.

Regards,
onlinedev

Dec 18, 2007 03:48 PM|abombss|LINK

As for terminology, I tend not to use the old DAL and BLL names. I find it much easier to talk about things using the vocabulary of the problem domain. Typically I have a DLL which is my Domain / Model; that DLL has all the entities, value objects, and domain services to support the application. The DAL still exists, but it is abstracted away from my application using Repositories. Repositories give me a domain-specific language to find, add, and remove the entities in my application.
In reality they are just calling back to the DAL; in my case I use NHibernate to handle all that. I also tend to use Specifications to define my queries in a domain-specific language. Instead of polluting my Repositories with OrderRepo.FindById, OrderRepo.FindForCustomer and OrderRepo.FindNotShipped, I have a specification, new OrdersThatHaveNotShipped.For(customerX), which encapsulates all the logic for querying through the data access layer.

Here is a brief example of what some of my controllers may look like:

public class OrdersController : Controller
{
    ISecurityGuard _securityGuard;
    IOrderValidator _orderValidator;
    IOrderRepository _repository;

    public OrdersController(ISecurityGuard securityGuard, IOrderValidator orderValidator, IOrderRepository repository)
    {
        _securityGuard = securityGuard;
        _orderValidator = orderValidator;
        _repository = repository;
    }

    public void Index()
    {
        RenderView("Index", new {Orders = _repository.FindAllOrders()});
    }

    public void Index(Customer customer)
    {
        IOrderSpecification spec = Specifications.For.Order();
        spec.ForCustomer(customer);
        RenderView("Index", new {Orders = _repository.Find(spec), Customer = customer});
    }

    public void Create(Customer customer, Order order)
    {
        ValidationErrors errors = new ValidationErrors();

        if (!_securityGuard.CanCustomerCreateNewOrders(customer))
            throw new SecurityException();

        if (!_orderValidator.IsValidForCreation(order, errors))
        {
            TempData["errors"] = errors;
            TempData["order"] = order;
            Redirect(c => c.New());
        }
        else
        {
            _repository.Save(order);
            Redirect(c => c.Index(customer));
        }
    }
}

Notice there is very little logic. Everything really happens by calling into services from the Model. I use IoC to build my controllers, so all those dependencies on services are injected automatically. This gives me decent separation of concerns; my controller is light, delegating back to the model / domain for the work. My controller is easily testable using mocks.
I can inject my controller with mock objects and ensure that it redirects to the right places, that an invalid order is not saved, that a valid order is saved, that the correct view is rendered, and that the correct data is set in the ViewData. Notice how none of those tests exercise any of the application or business logic, just the controller. The service implementations have their own sets of tests. Because OrderValidator has its own set of tests and they all pass, I can trust that I will get the desired behavior when it is injected into the controller; my controller tests do not need to worry about that.

The views now are very simple: I just grab the Orders enumerable and the Customer properties out of the ViewData, my .aspx.cs has zero code-behind, and the .aspx is just <%= %> calls that write out data from the ViewData. So the view for Order.New simply has a regular HTML form to build the order; that form submits to Order.Create, and when it is done we take the user back to the Order.Index page showing a list of orders for that particular customer.

I know it's trivial and missing a lot, but hopefully this clarifies a little bit of the difference between MVC and WebForms. Could you achieve something like this in WebForms? Pretty close, if you use MVP (Model-View-Presenter), but you will always fight the framework. ASP.NET WebForms is a totally different model: instead of having an Order.Create action specific to that scenario, you subscribe to a CreateOrder button click event; lots of things happen in the background to ensure your ButtonClick event is raised, and the entire page is posted back with viewstate, which is ugly and slow. And every now and then you will run into issues with events not bubbling up, or values of TextBoxes not being what you expected, and these things are very slow and difficult to debug, and nearly impossible to write tests for.
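The testing story described above (constructor injection plus test doubles) is language-agnostic. As a rough, hedged sketch, here it is condensed into Python rather than C# so it stays short; all of the names are hypothetical and only mirror the controller shape discussed in the thread:

```python
class FakeRepository:
    """Hand-rolled test double standing in for an order repository."""
    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)


class FakeValidator:
    """Test double that always reports the configured validity."""
    def __init__(self, valid):
        self.valid = valid

    def is_valid_for_creation(self, order):
        return self.valid


class OrdersController:
    # Dependencies arrive through the constructor, so tests can inject fakes.
    def __init__(self, validator, repository):
        self.validator = validator
        self.repository = repository

    def create(self, order):
        if not self.validator.is_valid_for_creation(order):
            return "redirect:new"      # invalid -> back to the form, nothing saved
        self.repository.save(order)
        return "redirect:index"        # valid -> saved, back to the list


# Tests: no web server, no page lifecycle, just the controller's decisions.
repo = FakeRepository()
assert OrdersController(FakeValidator(False), repo).create("o1") == "redirect:new"
assert repo.saved == []                # invalid orders are never saved
assert OrdersController(FakeValidator(True), repo).create("o2") == "redirect:index"
assert repo.saved == ["o2"]
```

The point is the same one made in the thread: because the controller only decides and delegates, its tests run instantly and never touch a page lifecycle.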
Dec 18, 2007 08:10 PM|robconery|LINK

I wrote a blog post on MVC and architecture:

Dec 19, 2007 07:54 AM|onlinedev|LINK

Hi Rob and abombss,

Thanks for providing such nice information / samples on the MVC architecture using ASP.NET. One more thing: please tell me whether or not we can use the server controls (GridView, FormView, ObjectDataSource etc.) or other third-party controls like Infragistics or ComponentOne in an MVC-based application.

Regards,
Onlinedev

Dec 19, 2007 02:47 PM|foobar123|LINK

-- onlinedev
> if you keep your actual business logic and data access functions separate
> from the view then you have actually achieved the MVC architecture.

Not quite. The Web Client Software Factory (WCSF) attempts to lighten the burden that classic web-form programming imposes on the view, but it is still MVP, albeit with a twist. You can read more about WCSF on this page:

If you're tempted to explore WCSF, two pieces of advice:
1. It doesn't work in VS2k8.
2. Make sure your VS2k5 has both C# and VB installed before you install WCSF. Otherwise you will have to take post-installation corrective steps that aren't well documented.

-- onlinedev
> tell me whether we can use the server controls (GridView, FormView,
> ObjectDataSource etc.) or any other third party controls

I don't know your third-party controls. In all likelihood, you won't be able to use many of the built-in ASP.NET controls. They inject funky JavaScript, emit funky identifiers, perform funky postbacks and utilize funky <form runat="server">.

Dec 19, 2007 03:03 PM|abombss|LINK

-- foobar123
> I don't know your third-party controls. In all likelihood, you won't be able to use many
> of the built-in ASP.NET controls. They inject funky JavaScript, emit funky identifiers,
> perform funky postbacks and utilize funky <form runat="server">.

Third-party controls are funky all right. The nice thing is you have pure HTML to deal with now, so many of those controls are pointless.
They were really built so you could use postbacks and subscribe to events. Now all you need is one of the dozens of OSS JavaScript toolkits, and you get all that stuff for free, with awesome communities, tons of plugins, examples everywhere, and regular updates. My favorites are jQuery and ExtJS, but there are several more, including Prototype, Dojo and YUI.

MVC javascript
18 replies
Last post Dec 19, 2007 03:03 PM by abombss
https://forums.asp.net/p/1195225/2064742.aspx?Re+MVC+Architecture+Sample
Runtime error: sensor timeout!

Error occurs after several seconds of operation. Code is very simple:

import sensor

img = sensor.snapshot()
while(True):
    img = sensor.snapshot()

Factory reset did not help. M7 version, firmware 2.60 latest.

iabdalkader
Posts: 542
Joined: Sun May 24, 2015 3:53 pm

Re: Runtime error: sensor timeout!

Hi, you need to call sensor.reset() and set the frame size and format before doing anything else. See the helloworld.py example.
http://forums.openmv.io/viewtopic.php?f=6&t=571&sid=15ae34296315f881db41e6be31e15575
Noise: Creating a Synthesizer for Retro Sound Effects - Audio Processors

This is the final part in our series of tutorials about creating a synthesizer-based audio engine that can be used to generate sounds for retro-styled games. The audio engine can generate all of the sounds at runtime without the need for any external dependencies such as MP3 files or WAV files. In this tutorial, we'll add support for audio processors, and code a delay processor which can add a decaying echo effect to our sounds.

If you have not already read the first tutorial or the second, I recommend doing so before continuing.

Processor Demo

In this final tutorial we will be adding audio processors to the core engine and creating a simple delay processor. The following demonstration shows the delay processor in action:

Only one sound is being played in that demonstration, but the frequency of the sound is being randomised, and the audio samples generated by the engine are being pushed through a delay processor, which gives it the decaying echo effect.

AudioProcessor Class

The first thing we need to do is create a base class for the audio processors:

package noise {
    public class AudioProcessor {
        //
        public var enabled:Boolean = true;

        //
        public function AudioProcessor() {
            if( Object(this).constructor == AudioProcessor ) {
                throw new Error( "AudioProcessor class must be extended" );
            }
        }

        //
        internal function process( samples:Vector.<Number> ):void {}
    }
}

As you can see, the class is very simple; it contains an internal process() method that is invoked by the AudioEngine class whenever any samples need to be processed, and a public enabled property that can be used to turn the processor on and off.

AudioDelay Class

The AudioDelay class is the class that actually creates the audio delay, and it extends the AudioProcessor class.
Here is the basic empty class that we will work with:

package noise {
    public class AudioDelay extends AudioProcessor {
        //
        public function AudioDelay( time:Number = 0.5 ) {
            this.time = time;
        }
    }
}

The time argument passed to the class constructor is the time (in seconds) of the delay tap - that is, the amount of time between each audio delay. Now let's add the private properties:

private var m_buffer:Vector.<Number> = new Vector.<Number>();
private var m_bufferSize:int = 0;
private var m_bufferIndex:int = 0;
private var m_time:Number = 0.0;
private var m_gain:Number = 0.8;

The m_buffer vector is basically a feedback loop: it contains all of the audio samples passed to the process method, and those samples are modified (in this case reduced in amplitude) continuously as the m_bufferIndex passes through the buffer. This will make sense when we get to the process() method.

The m_bufferSize and m_bufferIndex properties are used to keep track of the buffer's state. The m_time property is the time of the delay tap, in seconds. The m_gain property is a multiplier that is used to reduce the amplitude of the buffered audio samples over time.

This class only has one method, and that is the internal process() method, which overrides the process() method in the AudioProcessor class:

internal override function process( samples:Vector.<Number> ):void {
    var i:int = 0;
    var n:int = samples.length;
    var v:Number = 0.0;
    //
    while( i < n ) {
        v = m_buffer[m_bufferIndex]; // grab a buffered sample
        v *= m_gain;                 // reduce the amplitude
        v += samples[i];             // add the fresh sample
        //
        m_buffer[m_bufferIndex] = v;
        m_bufferIndex++;
        //
        if( m_bufferIndex == m_bufferSize ) {
            m_bufferIndex = 0;
        }
        //
        samples[i] = v;
        i++;
    }
}

Finally, we need to add the getters/setters for the private m_time and m_gain properties:

public function get time():Number {
    return m_time;
}

public function set time( value:Number ):void {
    // clamp the time to the range 0.0001 - 8.0
    value = value < 0.0001 ? 0.0001 : value > 8.0 ? 8.0 : value;
    // no need to modify the buffer size if the time has not changed
    if( m_time == value ) {
        return;
    }
    // set the time
    m_time = value;
    // update the buffer size
    m_bufferSize = Math.floor( 44100 * m_time );
    m_buffer.length = m_bufferSize;
}

public function get gain():Number {
    return m_gain;
}

public function set gain( value:Number ):void {
    // clamp the gain to the range 0.0 - 1.0
    m_gain = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
}

Believe it or not, that is the AudioDelay class completed. Audio delays are actually very easy once you understand how the feedback loop (the m_buffer property) works.

Updating the AudioEngine Class

The final thing we need to do is update the AudioEngine class so audio processors can be added to it. First of all, let's add a vector to store the audio processor instances:

static private var m_processorList:Vector.<AudioProcessor> = new Vector.<AudioProcessor>();

To actually add and remove processors to and from the AudioEngine class, we will use two public methods:

AudioEngine.addProcessor()

static public function addProcessor( processor:AudioProcessor ):void {
    if( m_processorList.indexOf( processor ) == -1 ) {
        m_processorList.push( processor );
    }
}

AudioEngine.removeProcessor()

static public function removeProcessor( processor:AudioProcessor ):void {
    var i:int = m_processorList.indexOf( processor );
    if( i != -1 ) {
        m_processorList.splice( i, 1 );
    }
}

Easy enough - all those methods are doing is adding and removing AudioProcessor instances to or from the m_processorList vector.
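Since the whole echo effect hinges on the feedback loop in AudioDelay.process(), here is a minimal sketch of the same algorithm in Python (not part of the engine; the function name is mine), which can be used to sanity-check the numbers outside of Flash:

```python
def process_delay(samples, buffer, gain, index=0):
    """Feedback delay line mirroring AudioDelay.process():
    each output sample is the fresh sample plus an attenuated copy
    of whatever sat in the buffer one 'tap' ago."""
    out = []
    for s in samples:
        v = buffer[index] * gain + s   # attenuated echo + fresh sample
        buffer[index] = v              # feed the mix back into the loop
        index = (index + 1) % len(buffer)
        out.append(v)
    return out, index

# A single impulse through a 4-sample buffer at gain 0.5 echoes every
# 4 samples, halving in amplitude each time: 1.0, 0.5, 0.25, ...
buf = [0.0] * 4
out, _ = process_delay([1.0] + [0.0] * 11, buf, 0.5)
assert out[0] == 1.0 and out[4] == 0.5 and out[8] == 0.25
```

Swapping in a buffer of 44100 * time samples gives exactly the behaviour of the setter above: a longer buffer means a longer gap between echoes.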
The last method that we will add rolls through the list of audio processors and, if a processor is enabled, passes the audio samples to that processor's process() method:

static private function processSamples():void {
    var i:int = 0;
    var n:int = m_processorList.length;
    //
    while( i < n ) {
        if( m_processorList[i].enabled ) {
            m_processorList[i].process( m_sampleList );
        }
        i++;
    }
}

Now it is time to add the final bit of code: a single line that needs to be added to the private onSampleData() method in the AudioEngine class:

if( m_soundChannel == null ) {
    while( i < n ) {
        b.writeFloat( 0.0 );
        b.writeFloat( 0.0 );
        i++;
    }
    return;
}
//
generateSamples();
processSamples();
//
while( i < n ) {
    s = m_sampleList[i] * m_amplitude;
    b.writeFloat( s );
    b.writeFloat( s );
    m_sampleList[i] = 0.0;
    i++;
}

The processSamples() call is the line that needs to be added to the class; it simply invokes the method that we previously added.

Conclusion

That, as they say, is that. In the first tutorial we took a look at various waveforms and how sound waves are stored digitally, then we constructed the core audio engine code in the second tutorial, and now we have wrapped things up with the addition of audio processors.

There is a lot more that could be done with this code, or with a variation of this code, but the important thing to bear in mind is the amount of work an audio engine has to do at runtime. If you push an audio engine too far (and that is easy to do) then the overall performance of your game may suffer as a consequence - even if you move an audio engine into its own thread (or ActionScript 3.0 worker), it will still happily bite chunks out of the CPU if you are not careful.

However, a lot of professional and not-so-professional games do a lot of audio processing at runtime, because having dynamic sound effects and music in a game can add a lot to the overall experience and can draw the player deeper into the game world.
The audio engine we put together in this series of tutorials could just as easily work with regular (non-generated) sound effect samples loaded from files: essentially, all digital audio is a sequence of samples in its most basic form.

One final thing to think about: audio is a very important aspect of game design. It is just as important and powerful as the visual side of things, and is not something that should be thrown together or bolted onto a game at the last minute of development if you really care about the production quality of your games. Take your time with the audio design for your games and you will reap the rewards.

I hope you enjoyed this series of tutorials and can take something positive away from it: even if you just think about the audio in your games a little more from now on, then I have done my job. All of the audio engine source code is available in the source download.

Have fun!
http://gamedevelopment.tutsplus.com/tutorials/noise-creating-a-synthesizer-for-retro-sound-effects-audio-processors--gamedev-1538
Why should you be careful about the String concatenation (+) operator in loops in Java?

Strings are used to store a sequence of characters in Java; they are treated as objects. The String class of the java.lang package represents a String. You can create a String either by using the new keyword (like any other object) or by assigning a value to a literal (like any other primitive datatype).

public class Sample {
   public static void main(String args[]) {
      String str1 = "Hello";
      String str2 = "how are you";
   }
}

Strings are immutable in Java, i.e. once you create a String literal it cannot be modified.

Storage

Since all the String values we define are objects of the String class, they are stored on the heap. But, unlike other objects, a separate memory location known as the String constant pool is allotted for String literals. Whenever you define a String value, the JVM creates a String object with the given value in the String constant pool. Therefore, if you run the above program, two String values are created in the String constant pool.

Concatenation of two Strings

If you try to concatenate these two String values as -

str1 = str1 + str2;

Since Strings are immutable in Java, instead of modifying str1, a new (intermediate) String object is created with the concatenated value and assigned to the reference str1. If you concatenate Strings in a loop, a new intermediate object is created on the heap for each iteration. This is not recommended, as it causes memory issues. Therefore, concatenating Strings in loops as shown in the following example is not recommended.
Example

public class StringExample {
   public static void main(String args[]) {
      String stringArray[] = {"Java", "JavaFX", "HBase", "Oracle"};
      String singleString = new String();
      for (int i = 0; i < stringArray.length; i++) {
         singleString = singleString + stringArray[i] + " ";
      }
      System.out.println(singleString);
   }
}

Output

Java JavaFX HBase Oracle

Example

If you have a scenario where you add String values in a loop, it is recommended to use StringBuilder instead of String -

public class StringExample {
   public static void main(String args[]) {
      String stringArray[] = {"Java", "JavaFX", "HBase", "Oracle"};
      StringBuilder singleString = new StringBuilder();
      for (int i = 0; i < stringArray.length; i++) {
         singleString.append(stringArray[i]);
         singleString.append(" ");
      }
      System.out.println(singleString);
   }
}

Output

Java JavaFX HBase Oracle
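As a small addition to the examples above (not part of the original article): since Java 8, the same result can be produced without an explicit loop by using String.join, which builds the result in a single pass and so also avoids the per-iteration intermediate Strings:

```java
public class JoinExample {
   // Joins the parts with a single space, building the result in one pass.
   static String joinWords(String[] parts) {
      return String.join(" ", parts);
   }

   public static void main(String args[]) {
      String stringArray[] = {"Java", "JavaFX", "HBase", "Oracle"};
      System.out.println(joinWords(stringArray));
   }
}
```

Note one small difference: unlike the loop versions above, String.join does not append a trailing delimiter after the last element.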
https://www.tutorialspoint.com/why-should-you-be-careful-about-string-concatenation-plus-operator-in-loops-using-java
On Fri, 30 Nov 2007 12:58:06 +0530
Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> wrote:

> Andrew Morton wrote:
> > On Thu, 29 Nov 2007 23:00:47 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> >> On Fri, 30 Nov 2007 01:39:29 -0500 Kyle McMartin <kyle@mcmartin.ca> wrote:
> >>
> >>> On Thu, Nov 29, 2007 at 12:35:33AM -0800, Andrew Morton wrote:
> >>>> ten million is close enough to infinity for me to assume that we broke the
> >>>> driver and that's never going to terminate.
> >>>>
> >>> how about this? doesn't break things on my pa8800:
> >>>
> >>> diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.c b/drivers/scsi/sym53c8xx_2/sym_hipd.c
> >>> index 463f119..ef01cb1 100644
> >>> --- a/drivers/scsi/sym53c8xx_2/sym_hipd.c
> >>> +++ b/drivers/scsi/sym53c8xx_2/sym_hipd.c
> >>> @@ -1037,10 +1037,13 @@ restart_test:
> >>>  	/*
> >>>  	 * Wait 'til done (with timeout)
> >>>  	 */
> >>> -	for (i=0; i<SYM_SNOOP_TIMEOUT; i++)
> >>> +	do {
> >>>  		if (INB(np, nc_istat) & (INTF|SIP|DIP))
> >>>  			break;
> >>> -	if (i>=SYM_SNOOP_TIMEOUT) {
> >>> +		msleep(10);
> >>> +	} while (i++ < SYM_SNOOP_TIMEOUT);
> >>> +
> >>> +	if (i >= SYM_SNOOP_TIMEOUT) {
> >>>  		printf ("CACHE TEST FAILED: timeout.\n");
> >>>  		return (0x20);
> >>>  	}
> >>> diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.h b/drivers/scsi/sym53c8xx_2/sym_hipd.h
> >>> index ad07880..85c483b 100644
> >>> --- a/drivers/scsi/sym53c8xx_2/sym_hipd.h
> >>> +++ b/drivers/scsi/sym53c8xx_2/sym_hipd.h
> >>> @@ -339,7 +339,7 @@
> >>>  /*
> >>>   * Misc.
> >>>   */
> >>> -#define SYM_SNOOP_TIMEOUT (10000000)
> >>> +#define SYM_SNOOP_TIMEOUT (1000)
> >>>  #define BUS_8_BIT 0
> >>>  #define BUS_16_BIT 1
> >>>
> >> That might be the fix, but do we know what we're actually fixing? afaik
> >> 2.6.24-rc3 doesn't get this timeout, 2.6.24-rc3-mm2 does get it and we
> >> don't know why?
> >>
> >
> > <looks at Subject:>
> >
> > <Checks that Rafael was cc'ed>
> >
> > So 2.6.24-rc3 was OK and 2.6.24-rc3-git2 is not?
>
> Yes, the 2.6.24-rc3 was Ok and this is seen from 2.6.24-rc3-git2/3/4.

There are effectively no drivers/scsi/ changes after 2.6.24-rc3 and we
don't (I believe) have a clue what caused this regression.

Can you please do a bisection search on this?

Thanks.
http://lkml.org/lkml/2007/12/3/208
Posted 05 May 2017
Link to this post

I got this problem in MVC 5. The Web.config under /Views contains:

<namespaces>
    <add namespace="System.Web.Mvc" />
    <add namespace="System.Web.Mvc.Ajax" />
    <add namespace="System.Web.Mvc.Html" />
    <add namespace="System.Web.Optimization"/>
    <add namespace="System.Web.Routing" />
    <add namespace="Kendo.Mvc.UI" />
</namespaces>

The view has @using Kendo.Mvc.UI at the top. Kendo.Mvc is in the references, but this error message still appears:

HtmlHelper does not contain a definition for Kendo

What to do?

Posted 10 May 2017
Link to this post

Posted 08 Apr 2018
Link to this post

Posted 09 Apr 2018
Link to this post

Posted 09 Apr 2018 in reply to Stefan
Link to this post

This is the error I am still getting. Please tell me how to solve it?

Posted 12 Apr 2018
Link to this post

public class TaskViewModel : IGanttTask {
DependencyViewModel : IGanttDependency

Posted 09 Jun in reply to Andrew
Link to this post

Posted 12 Jun
Link to this post
https://www.telerik.com/forums/kendo-ui-and-mvc-starting-a-new-project
Talk:Tag:historic=highwater mark

From OpenStreetMap Wiki

What tag to use?

I can see pages with the high water mark or flood mark tag, with good combination tags to use, but not on this page. We need to decide which tag to use. -Yod4z (talk) 17:13, 20 March 2017 (UTC)

- Do you mean flood_mark=* (216 uses) and high_water_mark=* (20 uses)? I like historic=highwater_mark (38 uses) more, as it is in the historic namespace. But the docs here would need to be improved, which I can do if we decide to prefer this tag. Chrabros (talk) 02:41, 21 March 2017 (UTC)
- Regarding the value (term), I prefer flood_mark as simpler, shorter and easier to understand for non-hydrologists. We also started with high_water_mark, but found this term ambiguous. "High-water mark" is more popular in the US, while "flood mark" is more common in the EU ().
- Agree, the historic namespace is better. But we can't mix historic=flood_mark with memorial:type=*, as that requires historic=memorial. So I propose to duplicate selected memorial:type=* values and create flood_mark:type=* with additional values, like painted for marks painted directly (on walls, ...).
- Should deprecated OSM wiki pages be deleted or just redirected? We have an ongoing flood project, so after final agreement we can work on that. --Cracert (talk) 11:10, 23 March 2017 (UTC)
- Maybe add info to the page, like on the amenity=fire_hydrant page, saying not to use the old tags any more. I like the name flood mark; it's clear and comprehensible in any language. --Yod4z (talk) 16:50, 23 March 2017 (UTC)
https://wiki.openstreetmap.org/wiki/Talk:Tag:historic%3Dhighwater_mark
How to change the texture of OglModel at runtime?

Tagged: 64_bits, change texture, runtime, SOFA_1712, Windows_10

This topic contains 5 replies, has 2 voices, and was last updated by Hugo 2 months ago.

Hello everyone,

I want to change the texture of OglModel using a PNG at runtime, but I do not know how to do it. Could anyone give me some help?

Wong

I am fine with helping you, but you should learn coding in Python, since it is extremely useful in SOFA. Here is what I would write to test if it works (do not forget to use the PythonScriptController in your XML scene).

XML (inspired from the example examples/Components/visualmodel/OglModel.scn):

<?xml version="1.0"?>
<Node name="root" dt="0.01">
    <RequiredPlugin name="SofaPython" pluginName="SofaPython" />
    <PythonScriptController filename="model.py" classname="ChangeTexture" listening="true"/>
    <OglModel name="VisualModel" fileMesh="mesh/raptor_8kp.obj" texturename="/data/Softwares/sofa/src/master/share/textures/bubble_skin.bmp" translation="30 0 0" useNormals="0"/>
    <MeshObjLoader name="loader" filename="mesh/raptor_8kp.obj" translation="0 0 0"/>
    <OglModel name="VisualModel2" src="@loader" color="red"/>
</Node>

Python:

import sys
import Sofa

class ChangeTexture(Sofa.PythonScriptController):

    def initGraph(self, node):
        print 'initGraph called (python side)'
        self.VM = node.getObject('VisualModel')
        return 0

    def onKeyPressed(self, k):
        if k == "M":
            self.VM.findData('texturename').value = '/data/Softwares/sofa/src/master/share/textures/brushed_metal.bmp'
            self.VM.init()
            self.VM.initVisual()
        return 0

Dear @outtt, do you know whether GDUT would be ready to support the open-source project SOFA? Many ways are available to support the open-source initiative: donations, memberships, projects. And many non-profit ways also exist, like feature contributions, communication about your plugins (conferences, YouTube, LinkedIn, Twitter) and setting up collaborations with the community!
Best, Hugo

Thank you for your reply, Hugo. I once tried similar code, but it did not work because I had omitted the initVisual() call! By the way, GDUT is the college I attended, but I have now graduated. However, I am interested in the memberships. How can I join? I still need more help, though, because SOFA is still difficult for me. Will I be able to communicate with other developers if I join? Wong

Dear Wong, I will tell you more about the memberships in a private email, but most of the information is available online. Regarding a developer room, note that we recently set up a Gitter room. This room is dedicated to developers, for development questions, so if you have such questions, do not hesitate to join. For user questions, the forum is the right place to be. Note that scripting in Python is not really considered SOFA development. Best regards, Hugo
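If, like Wong, you want to cycle through several textures rather than switch once, the selection logic can be kept out of the SOFA callback and tested on its own. A hedged sketch (the texture list and the `next_texture` helper are made up for illustration; only the `findData`/`init`/`initVisual` calls come from Hugo's example above):

```python
# Framework-free helper: pick the texture that follows the current one,
# wrapping around at the end of the list.
def next_texture(current, textures):
    try:
        i = textures.index(current)
    except ValueError:
        # unknown current texture: start from the first one
        return textures[0]
    return textures[(i + 1) % len(textures)]


# Inside a PythonScriptController it might be used like this (sketch only,
# following Hugo's example; without initVisual() the new texture is not loaded):
#
# def onKeyPressed(self, k):
#     if k == "M":
#         tex = self.VM.findData('texturename').value
#         self.VM.findData('texturename').value = next_texture(tex, TEXTURES)
#         self.VM.init()
#         self.VM.initVisual()
#     return 0
```

Keeping the cycling logic in a plain function like this makes it testable without launching SOFA at all.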
https://www.sofa-framework.org/community/forum/topic/how-to-change-the-texture-of-oglmodel-at-runtime/
digitalmars.D.learn - Quirks of 'alias foo this'
- Sean Cavanaugh <WorksOnMyMachine gmail.com> Apr 25 2011
- Andrej Mitrovic <andrej.mitrovich gmail.com> Apr 25 2011
- Andrej Mitrovic <andrej.mitrovich gmail.com> Apr 25 2011

So my research into making a nice, friendly-to-use COM interface wrapper for D has taken a few odd turns, and I am wondering if there is a way to make the implementation nicer. I discovered the 'alias foo this' syntax, which lets structs almost seamlessly impersonate a member variable. This has turned out to solve most of my original need to wrap the functions, but it is imperfect.

The main problem I am having with 'alias foo this' is that I can't find a way to catch code reading the aliased variable, in cases of assignment or implicit conversion to foo's type. I can catch writes just fine with opAssign, but finding a way to overload the reads has me stumped. I did some experiments with wrapping the methods in mixin templates, but using 'alias foo this' is about 100% more useful and intuitive and takes 99.9% less code to write :)

Examples (the full ComPtr code is further down):

    // initializes to null by default
    ComPtr!(ID3D11Device) device;
    ComPtr!(ID3D11Device) otherdevice;

    // The 'device' argument to D3D11CreateDevice is implemented as
    // 'out ID3D11Device', and uses the 'alias p this' feature to
    // auto-magically write directly into device.p. Ideally
    // I could hook this and either call SafeRelease here or assert
    // that the p variable is null before being written to.
    // This also represents a case where you can write to the
    // struct without detecting it.
    HRESULT rslt = D3D11CreateDevice(
        null, D3D11_DRIVER_TYPE.HARDWARE, null,
        0 | D3D11_CREATE_DEVICE.DEBUG, null, 0,
        D3D11_SDK_VERSION, device, &featureLevel, null);

    // post-blit case, works
    otherdevice = device;

    // gives me a copy of 'p' due to 'alias p this'
    ID3D11Device rawdevice = device;

    // assignment back the other direction is caught by opAssign;
    // this is also the code path used if there are multiple COM
    // interfaces in the hierarchy (IUnknown->ID3D11Resource->ID3D11Texture)
    // and post-blit isn't used because the types are different.
    device = rawdevice;

My current version of ComPtr:

    struct ComPtr(T)
    {
    public:
        static assert(is(T : std.c.windows.com.IUnknown) ||
                      is(T : win32.unknwn.IUnknown));

        T p;
        alias p this;

    private:
        this(T inPtr)
        {
            p = inPtr;
        }

    public:
        this(this)
        {
            if (p !is null)
            {
                p.AddRef();
            }
        }

        ~this()
        {
            SafeRelease();
        }

        // Attach and Detach set/unset the pointer without touching the
        // refcount (unlike opAssign assignment).
        void Attach(T other)
        {
            SafeRelease();
            p = other;
        }

        T Detach()
        {
            T rval = p;
            p = null;
            return rval;
        }

        const bool HasData()
        {
            return (p !is null);
        }

        void opAssign(T other)
        {
            if (other !is null)
            {
                other.AddRef();
                SafeRelease();
                p = other;
            }
            else
            {
                SafeRelease();
            }
        }

        void SafeRelease()
        {
            if (p !is null)
            {
                p.Release();
                p = null;
            }
        }
    }

Apr 25 2011

I have a similar problem. :) I'd like to catch a call to a COM object to check if it was properly initialized first. Here's one way to do it with regular classes: You can read about whitehole here: I think the above example could be used with COM objects, but I haven't tried it yet.

Apr 25 2011
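For what it's worth, the read interception Sean is after is easy to express in languages with a property protocol, which is exactly the hook that `alias this` over a plain field does not give you. A Python sketch of the same ComPtr idea, with both reads and writes funneled through a property (the `FakeUnknown` object and its `AddRef`/`Release` are stand-ins, not a real COM binding):

```python
class ComPtr(object):
    """Ref-counted smart-pointer sketch: every read and write of .p
    goes through the property, so both directions can be intercepted."""

    def __init__(self, ptr=None):
        self._p = None
        self.p = ptr  # route through the setter so AddRef happens

    @property
    def p(self):
        # Read interception point: one could assert initialization here,
        # which is what the alias-this-over-a-field version cannot do.
        return self._p

    @p.setter
    def p(self, other):
        if other is not None:
            other.AddRef()       # grab the new reference first
        if self._p is not None:
            self._p.Release()    # then drop the old one (SafeRelease)
        self._p = other

    def detach(self):
        # Hand the pointer back without touching the refcount.
        rval, self._p = self._p, None
        return rval


class FakeUnknown(object):
    """Stand-in for a COM interface; just counts AddRef/Release calls."""
    def __init__(self):
        self.refs = 1

    def AddRef(self):
        self.refs += 1

    def Release(self):
        self.refs -= 1
```

In D terms, getting the same effect would presumably mean aliasing a getter function (or generating wrapped methods) rather than the raw field, which is the extra code Sean was hoping to avoid.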
http://www.digitalmars.com/d/archives/digitalmars/D/learn/Quirks_of_alias_foo_this_26507.html
Re: Datasets fundamental
- From: "Newkid" <shyamspandey@xxxxxxxxx>
- Date: 12 Jun 2006 00:10:54 -0700

Hello Gregory! First of all, thanks for shedding some light on the various options. As I said earlier, I don't have any options; I need to work with the client the way they want. That means, as you explained quite well in your reply, it's going to be a pain in my ***. As far as transaction or update failures are concerned, both parties have agreed to send a batch code with each update query. The client application has to return the same batch code within about 10-15 minutes of receiving it. If it does, the databases will be updated; if not, the same block has to be sent again with the next transaction under a new batch code. This is the basic logic behind the work. The present bottleneck for me is getting the data out of the dataset. I'm not getting any help from the client end and have yet to figure out how to do it and why the data isn't coming through. Thank you so much for your precious time, and sorry for getting back to you a little late due to bad health. Sam

Cowboy (Gregory A. Beamer) wrote:

I will have to run back through this post later, but let's see if I get the basic gist of what is going on. The web site is using data disconnected in the form of a DataSet. You are loading from the database into this "cache" and you want to update on a regular schedule. The issue you are going to have is syncing up the data in memory with the database. This means either a) locking the app out, updating and refreshing, which could lock people out permanently, or b) taking a chance of losing data during the sync. Neither is particularly appealing to me, unless this is extremely fast (with only 65k records, it is probably not a big deal). Depending on traffic, another option is using SQL 2005 Express and having it updated. SQL has a transaction log, which means you can update while activity is in progress without danger of sync issues.
SQL Express is free, and if you are using ASP.NET 2.0 you can create the file-based version on the ASP.NET site without having to deploy the server (note that the server software will have to exist on the ASP.NET server). This may not be an option for you, but it is safer. You then move from a custom "cache" solution to a normal replication type of scenario. You can update through replication, for MS SQL, or use Service Broker if the connection will be broken from time to time. You will have to customize the transfer to Sybase or MySQL, but there are plenty of ways to do that from database to database. Not sure if these are options for you, but the "cache" version means you will most likely have to lock and/or use a rather chatty sync to ensure no data is lost. It can be wrapped in a transaction-type mechanism, but rollback will be a pain. What you have to determine is how fast the database refresh is and whether reloading a DataSet is an option based on traffic. If you must go with the cache, I would wrap the cache with your own object, pull the new data in first, then attempt to refresh the database, and then make sure the new dataset, now serving requests, is in sync. The danger points are the second and third steps. If this is really fast, you can lock the data and run the cycle, which is the safest. The safest way I know of to do this is to wrap the cache and use multiple threads with a mutex (mutual exclusion lock) so two operations cannot act on the data at one time. The danger here is how the system recovers if the update fails. If SQL Express is an option, use it, as the hard part is already solved for you. Working with an in-memory database means you have to write all of the infrastructure yourself. The longer the update takes, the more complex your code will likely get. Hope this helps.

-- Gregory A. Beamer
*************************************************
Think Outside the Box!
*************************************************

"Newkid" <shyamspandey@xxxxxxxxx> wrote in message news:1149747295.787354.33110@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Hello Gregory! Thanks a ton for the wonderful explanations! First, let me brief you on the work. The job assigned to me is roughly this: the client has a tool that pushes records (the products the client wants to sell online, out of his 65k-product list) in a dataset onto a web server cache. What I need to do is take the records from the datasets and update three different databases (Sybase / MySQL / MS SQL) on a fixed schedule, say every half hour. The current e-commerce website is in PHP / MySQL. Now, please find my answers below your queries:

a) Yes, you're correct, I'm referring to ASP.NET caching. Well, I'm biased toward using ASP.NET caching and reading the records from the cache. The person who coded the service at the client end is forcing us to consume their service that way despite our recommendations. I have a basic question here: my understanding is that a dataset can contain the complete inventory; if that's the case, how can one retrieve records from a given table? In other words, is it possible to retrieve the value of one particular field of a table from a dataset? If yes, how? Hope I'm making sense.

c) > This is an "it depends" answer, as you never know when garbage collection clears things out. From the standpoint of access, unless you cache them or stick them somewhere, they are gone after they are passed on.

Okay. Is there any default time frame? I'm asking because the person sitting at the client end is not responding at all. He keeps saying that no one has faced this issue so far (approx. 1500 installations) except us. And I'm not able to read the records from the cache by any means: serialization/deserialization, the System.Web.Caching namespace, and what not. I don't know how to proceed, or how to check whether he is correct.
If you use the standard ADO.NET methods, you link up the SELECT, INSERT, UPDATE and DELETE queries to your DataAdapter and call Update(). As long as you have either a query or a stored procedure (I think MySQL has these in its latest incarnation).

Would you please be kind enough to explain it with an example, or redirect me to a page where I can read more about them? Once again, thank you so much for your time and support. Shyam

Cowboy (Gregory A. Beamer) wrote:

Answers inline with queries.

<shyamspandey@xxxxxxxxx> wrote in message news:1149677795.725421.249110@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Hi there! I'm very new to the .NET environment and have only a little understanding of it. For various reasons, I need to work directly with datasets, web services and XML. I don't have much of a grip on them, but I would appreciate it if someone could shed some light on the following questions:

a) How can datasets that have been passed by value be retrieved from the cache?

b) How would one know whether a dataset contains any records or not? Is there any way to test this? Please show me the code.

Single-table dataset:

    if (dataSet.Tables[0].Rows.Count != 0)
    {
        // Do something, as there are records
    }

c) How long will datasets stay in server memory?

This is an "it depends" answer, as you never know when garbage collection clears things out. From the standpoint of access, unless you cache them or stick them somewhere, they are gone after they are passed on.

d) How do I create a dataset and publish it in the cache?

For a web service, you can cache with Cache["nameOfDataSet"] = ds; I am still not sure what you are caching, or where ... much less why.

Besides this, I have other questions that don't fall into this group, such as:

1) How do I map dataset records to update a MySQL server?

If you use the standard ADO.NET methods, you link up the SELECT, INSERT, UPDATE and DELETE queries to your DataAdapter and call Update().
As long as you have either a query or a stored procedure (I think MySQL has these in its latest incarnation).

-- Gregory A. Beamer
*************************************************
Think Outside the Box!
*************************************************
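The batch-code handshake Newkid describes at the top of the thread (send a batch code with each update, commit only if the client echoes the code back within roughly 10-15 minutes, otherwise resend the same block with the next transaction under a new code) can be sketched independently of ADO.NET. All names below are illustrative; nothing here comes from the actual client tool:

```python
import itertools
import time


class BatchTracker(object):
    """Track update batches awaiting acknowledgement from the client.

    A batch is committed only when the client echoes its code back
    within the timeout; expired batches are pulled out so they can be
    resent with the next transaction under a fresh code.
    """

    def __init__(self, timeout_seconds=15 * 60, clock=time.time):
        self._codes = itertools.count(1)   # simple monotonically increasing codes
        self._pending = {}                 # batch code -> (payload, deadline)
        self._timeout = timeout_seconds
        self._clock = clock                # injectable for testing

    def send(self, payload):
        """Register a payload under a fresh batch code and return the code."""
        code = next(self._codes)
        self._pending[code] = (payload, self._clock() + self._timeout)
        return code

    def acknowledge(self, code):
        """Client echoed the code back: return the payload to commit,
        or None if the code is unknown or came back too late."""
        entry = self._pending.pop(code, None)
        if entry is None:
            return None
        payload, deadline = entry
        if self._clock() > deadline:
            return None        # too late: treat as expired, do not commit
        return payload         # caller updates the three databases here

    def expired(self):
        """Pull out payloads whose deadline has passed, to be resent
        with the next transaction under a new batch code."""
        now = self._clock()
        late = [c for c, (_, d) in self._pending.items() if now > d]
        return [self._pending.pop(c)[0] for c in late]
```

Injecting the clock keeps the 10-15 minute window testable without waiting; in production the default `time.time` would be used and `expired()` polled before each new transaction.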
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.adonet/2006-06/msg00262.html