Remembering that the Facelet in Listing 3 accesses the getCars1() method, you'll notice that the method simply returns the cars1 field, which is defined as a list with the line private List<Car> cars1;. How is this list populated?

Lifecycle hooks

Take a look at the init() method in Listing 4, which is annotated with @PostConstruct. The @PostConstruct annotation is a lifecycle hook, meaning that it allows the developer to tap into the JSF bean lifecycle. It tells JSF to run this code once the bean has been constructed and its dependencies injected, but before it is put to use. In the body of the init() method, the CarService object (previously injected into the service field) is used to populate the cars1 list with this line: cars1 = service.createCars(10);.

Service classes

Now let's shoot over to the CarService class. A service class is a class that provides services to other classes, usually by serving one or more controllers. Like the MVC pattern, the service class idea is common, and not unique to JSF.

Listing 5. CarService class

package org.primefaces.showcase.service;

import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;
import org.primefaces.showcase.domain.Car;

@Named("carService")
@ApplicationScoped
public class CarService {

    private final static String[] colors;
    private final static String[] brands;

    static {
        colors = new String[10];
        colors[0] = "Black";
        colors[1] = "White";
        // ...other colors...
        brands = new String[10];
        // ...brands elided...
    }

    public List<Car> createCars(int size) {
        List<Car> list = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            list.add(new Car(getRandomId(), getRandomBrand(), getRandomYear(),
                    getRandomColor(), getRandomPrice(), getRandomSoldState()));
        }
        return list;
    }

    // Utility methods like getRandomId() have been removed
}

To begin, notice how the code informs JSF this is a managed class: @Named("carService").
In this case, the bean is given a name, "carService", which we saw in Listing 4 used as a handle to reference the bean in the DataListView controller. In addition, the class is specified as @ApplicationScoped, which in essence means that the object instance will live for the duration of the application. The actual Java code for preparing the cars list should be fairly self-documenting. But how does JSF find these beans?

Bean metadata

JSF uses the application's metadata to locate the beans it needs. In the past, that metadata lived primarily in a separate XML file, /WEB-INF/beans.xml. CDI is enabled by default as of Java EE 7, so that file is not required unless you need something not supported by annotations. (You no longer even have to tell JSF to use bean-discovery-mode="all".)

Events in JSF

Along with being an MVC, component-based framework, JSF is an eventing framework: it uses events to handle interactions with users and components. We'll conclude with a quick look at JSF's event-oriented architecture.

Ideally, events provide a clean separation of concerns by isolating event generation code from event handling code. JSF events can be divided into two types: server-side events and client-side events. To set up a server-side event, you create an event handler in Java code. You then create an event emitter (in the component markup), or an event listener, whose role is to listen for lifecycle events. Client-side events--things like a user clicking a button in the browser--are transparently mapped to event handlers on the server. Component events are handled in this way. For the most part, the JSF framework abstracts the actual wiring of the events. This is true even of Ajax- or WebSocket-style eventing.

Component-managed events

JSF hides the details of how an event works by encapsulating those details inside one or more components.
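As a generic, hedged sketch of that server-side wiring (this markup is not from the article's listings; carBean, onSave, and carTable are hypothetical names), a button click in the browser can be mapped to a bean method like so:

```xml
<!-- Hypothetical names: carBean, onSave, and carTable are illustrative only. -->
<h:form>
    <h:commandButton value="Save" actionListener="#{carBean.onSave}">
        <!-- f:ajax wires the client-side click to the server over Ajax
             and re-renders the component with id "carTable" -->
        <f:ajax execute="@form" render="carTable" />
    </h:commandButton>
</h:form>
```

On the server, the matching handler would simply be a public void onSave(ActionEvent event) method on the bean; the framework handles the wiring in between.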
To understand how this works, go back to the DataList example and find Paginator, a dataList that employs events. Figure 3 presents the output of a Paginator dataList.

Figure 3. Paginator dataList with events

This list uses the same kind of structure as the DataListView in Listing 4 to output the grid, but it has a new feature: if you click on the magnifying glass icon, you will see a popup window displaying a list of links. Listing 6 shows the component that creates those links. (Attributes elided in the source are marked with "...".)

Listing 6. Magnifying glass links

<p:commandLink oncomplete="PF('carDialog').show()" ...>
    <f:setPropertyActionListener ... />
    <h:outputText ... />
</p:commandLink>

Listing 7 has the component that supplies the actual popup.

Listing 7. Popup component

<p:dialog widgetVar="carDialog" ...>
    <!-- Dialog Content -->
</p:dialog>

By carefully studying Listings 6 and 7, it becomes clear that in JSF, JavaScript eventing is handled by components. In Listing 6, the JavaScript oncomplete event is used to fire JavaScript code: oncomplete="PF('carDialog').show()". Note that you don't define the function to be called; instead, it's pre-defined by the component internals. The p:dialog component in Listing 7 gives itself a name with widgetVar="carDialog". The work to unite these two into a functioning JavaScript dialog is handled by the PrimeFaces framework and the components themselves.

Conclusion

JavaServer Faces is the Java standard for creating web-based UIs. As of this writing, JSF 2.3 is the current version, and the reference implementation is Eclipse Mojarra. JSF has been selected for inclusion and further development in Jakarta EE, which is good news for Java developers who want a standard way to develop modern, Java-based web UIs.

Learn more about JSF

- "Looking at the brand new JSF 2.3: New features for an old favorite" (Josh Juneau, Jaxenter.com) is an overview of JSF 2.3, including highlights not covered here.

- "Get started with Servlet 4.0" (Alex Theedom, IBM Developer) includes a section on Servlet 4.0 updates in JSF 2.3, primarily for HTTP/2 and server push.
- Take what you've learned to the next level, with "Learn to fully leverage JavaServer Faces" (Eugen Paraschiv, Stackify). This story, "What is JSF? Introducing JavaServer Faces" was originally published by JavaWorld.
https://www.infoworld.com/article/3322533/what-is-jsf-introducing-javaserver-faces.html?page=2
Red Hat Bugzilla – Bug 171574: Fails with Traceback error.

Description: Fails trying to do an FTP install (23/10/05 boot.iso for i386) when either a graphical or text install is tried.

Version-Release number of selected component (if applicable): version in 23/10/05 minstg2.img or stage2.img

How reproducible: Always

Steps to Reproduce:
1. Boot from a boot disk created from boot.iso.
2. Do either the default graphical or text install via FTP.

Actual Results: install exits abnormally

Expected Results: working install process

Additional info:

Traceback (most recent call last):
  File "/usr/bin/anaconda", line 1074, in ?
    from yuminstall import YumBackend
  File "/usr/lib/anaconda/yuminstall.py", line 23, in ?
    import yum
  File "/usr/lib/python2.4/site-packages/yum/__init__.py", line 36, in ?
    import config
  File "/usr/lib/python2.4/site-packages/yum/config.py", line 34, in ?
    from repos import variableReplace, Repository
  File "/usr/lib/python2.4/site-packages/yum/repos.py", line 28, in ?
    from repomd import repoMDObject
  File "/usr/lib/python2.4/site-packages/repomd/repoMDObject.py", line 19, in ?
    from mdErrors import RepoMDError
ImportError: cannot import name RepoMdError

install exited abnormally . . .

The above output was from my second try, this time at a text-based install, but the previous graphical install failed similarly. Both install attempts were on 23/10/05 between 14:00 and 14:45 BST.

Just noticed that there is no minstg2.img for i386 in today's build (24/10/05). Making the image failed:

Running mkcramfs /tmp/instimage.dir.950 /mnt/redhat/nightly/rawhide-20051024/i386/Fedora/base/minstg2.img
Exceeded MAXENTRIES. Raise this value in mkcramfs.c and recompile. Exiting.

Build machine had a busted version of util-linux still installed somehow. Fixed that; root cause is the util-linux bug.

*** This bug has been marked as a duplicate of 171337 ***
*** Bug 171586 has been marked as a duplicate of this bug. ***
*** Bug 171655 has been marked as a duplicate of this bug. ***
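Incidentally, the traceback's last two lines point at a case mismatch: the import line shows RepoMDError while the error complains about RepoMdError. Python names are case-sensitive, as this small standalone sketch (not the anaconda/yum code) illustrates:

```python
# Standalone illustration (not the anaconda/yum code): asking a module for a
# name whose case doesn't match what it defines raises ImportError.
import types

def make_mderrors_module():
    """Build a throwaway module that defines RepoMDError, like repomd.mdErrors."""
    mod = types.ModuleType("mdErrors")
    mod.RepoMDError = type("RepoMDError", (Exception,), {})
    return mod

def import_name(module, name):
    """Mimic `from module import name` against an in-memory module."""
    try:
        return getattr(module, name)
    except AttributeError:
        raise ImportError("cannot import name %s" % name)
```

Importing "RepoMDError" succeeds, while "RepoMdError" reproduces the traceback's failure mode.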
https://bugzilla.redhat.com/show_bug.cgi?id=171574
15 Comments

Great post, some findings below:
- Need to add: import android.content.Context; import android.location.LocationManager;
- Typo in the code to check location provider: change boolen to boolean.

good post!

What if the phone is searching for a network constantly because there is no signal? I.e., no mobile network is available, the wifi is off, and airplane mode is disabled, yet the method “isProviderEnabled(LocationManager.NETWORK_PROVIDER)” still says that the mobile network provider is enabled. What do you do? Sorry for my bad English.

Hi Yeferson … no worries about your English … it looks good to me. In the scenario you describe, isProviderEnabled should still return true. That method simply indicates whether the provider is on, not whether it’s reliable. What would probably happen is that you would get either very low-accuracy results or old results. In most cases where one is not able to get a phone signal, it means you are out in the middle of nowhere, which is when a GPS provider is preferable. So if you find that the network provider is returning old results or results with poor accuracy, you’ll want to start the GPS provider and use it instead.

Something you might want to consider is using the Google Play Services location system (something that Google released after I wrote the above post). It takes care of these kinds of scenarios in terms of switching between the network and GPS providers, etc. The following link will take you to a page that walks through its usage…

I hope that helps, Jim

Thanks for the prompt reply. I read about the possibility of obtaining the location with Google Play services, but in my country most Android devices use version 2.2, so that's the minimum target. The device I use for testing does not allow me to use Google Play services: it asks me to upgrade the Google Play APK, and even then it is incompatible.
However, your post helped me improve the app development. Best regards, Yeferson.

.. I’m really glad I could help.

[…] APIs for checking network availability (I talked about some of the Android network APIs in this discussion of network-based locates) but those APIs don’t tell the whole […]

Sir, if the device's GPS and network are both disabled, how can I get the location?

If both GPS and network locates are disabled, you can’t get the location. Google’s desire is to provide the user with the ability to control whether apps can get the device location. When both of those options are disabled, the user is telling us that they don’t want our apps to get their location.

Really good post! Thanks a lot!

Thanks Luis!

I need to know how to prevent the user from disabling the location service. And if the user has disabled it, how can I send information to the server that this particular user has turned off his GPS?

There’s no way to actually prevent a user from disabling location services. Google’s attitude is that it’s the user’s phone and the user therefore has authority over the settings. 🙂 In terms of being notified when the service is disabled … I have not had to do this myself, but you should be able to achieve the desired result with the FusedLocationProviderApi. Using that API you can register a PendingIntent (directed to a service, for example) that will fire when there’s a change in location availability. The corresponding Intent will contain an instance of the LocationAvailability class, which you can then use to check whether location is currently available. If the location is not available, you can then check the current device settings to determine whether the lack of location availability is due to those settings. If it is, you can then send a message to the server indicating the change.

hi hedgehogjim, can we find a user's location without using GPS or the internet, but with the network provider only?
Sure thing … just indicate that you want to use the network provider when setting up the location provider.
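To make that concrete, here is a hedged sketch (standard Android framework calls; this only runs inside an Android app, and the class name, interval, and distance values are illustrative assumptions) of requesting network-provider updates:

```java
// Illustrative sketch: assumes an Android Context (e.g. an Activity) and
// ACCESS_COARSE_LOCATION declared in the manifest.
import android.content.Context;
import android.location.LocationListener;
import android.location.LocationManager;

public class NetworkLocator {
    public void start(Context context, LocationListener listener) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Only request updates if the network provider is actually enabled.
        if (lm.isProviderEnabled(LocationManager.NETWORK_PROVIDER)) {
            // 5000 ms minimum interval and 10 m minimum distance are
            // arbitrary illustrative values; tune them for your app.
            lm.requestLocationUpdates(
                    LocationManager.NETWORK_PROVIDER, 5000, 10, listener);
        }
    }
}
```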
https://jwhh.com/2013/04/03/android-network-locates-when-enabled-is-not-enabled/?replytocom=355
17 August 2012 05:42 [Source: ICIS news]

By Ong Sheau Ling

SINGAPORE (ICIS)--An influx of refugees escaping war-torn Syria into Jordan and other neighbouring countries, as well as the upcoming Islamic pilgrimage to Mecca in Saudi Arabia, will lead to a spike in demand for mattresses in the Middle East, market players in the toluene di-isocyanate (TDI) and polyols industry said on Friday.

TDI and polyols go into the manufacture of flexible polyurethane (PU) foam, which is used in mattresses.

“Till date, there are about 150,000 refugees from Syria,” one market player said.

As the Syrian crisis worsens, there is a likelihood of more refugees, which will bolster demand for PU mattresses, he added.

Thousands of Syrians have been fleeing the violence of the political upheaval in their home country since March 2011 to seek shelter in neighbouring nations. About 17,000 people were killed in the violence. Official records from the UN High Commissioner for Refugees (UNHCR) track the number of registered refugees from Syria.

Meanwhile, pilgrims will soon start the trek towards the Islamic holy land of Mecca.

“There will be close to 2m people attending the Haj, and staying there for at least three, four days,” a Dubai-based trader said.

“[Saudi Arabian] foamers have to start producing foams from [the] second half of September, so they will have to start stocking up TDI and polyols beginning end-August,” a northeast Asian producer said.

Demand will strongly pick up next week following the Eid-ul-Fitr celebration, which marks the end of the Muslim fasting month of Ramadan, they said. For this reason, some TDI and polyols producers are confident of raising prices for September cargoes.

On 16 August, TDI prices were assessed at $2,800-2,900/tonne (€2,268-2,349/tonne) CFR (cost and freight) GCC (Gulf Cooperation Council) and $2,850-2,900/tonne CFR East Med.

Flexible polyols slabstock prices were at $2,050-2,150/tonne CFR Middle East on 16 August, ICIS data showed.
“We do not know how long the Syrian [political unrest] situation will last, so we can’t be sure how long the support to TDI can be,” a Jordanian buyer said.

Globally, there is an oversupply of TDI, and demand typically slows down around the fourth quarter, market sources said; they doubt that prices will increase significantly from the recent pick-up in demand.

Polyols prices, on the other hand, may still be on an uptrend in the Middle East, tracking gains in upstream propylene prices in Europe and Asia.

“Perhaps we may see a $50/tonne increase in polyols for September shipments,” another Dubai-based trader said.

European August propylene contracts were settled at €1,055/tonne ($1,302/tonne), up by €120/tonne from July. Average spot Asian propylene prices were at $1,340/tonne.

($1 = €0.81)

Additional reporting by Pearl Bantillo
http://www.icis.com/Articles/2012/08/17/9587822/syrian-exodus-muslim-haj-boost-mideast-tdi-polyols-demand.html
#include <nsDeque.h>

nsDequeIterator is an object that knows how to iterate (forward and backward) through a deque. Normally you don't need to do this, but there are some special cases where it is pretty handy.

One warning: the iterator is not bound to an item, it is bound to an index, so if you insert into or remove from the beginning of the deque while using an iterator (which is not recommended), the iterator will point to a different item.

Member notes:

- The copy constructor creates a copy of a DequeIterator.
- There is a method that moves the iterator to the first element in the deque.
- There is a method that retrieves the iterator's notion of the current node. Note that the iterator floats, so you don't need to do:
      ++iter; aDeque.PopFront();
  unless you actually want your iterator to jump two positions relative to its origin.
  Picture: [1 2i 3 4]
  PopFront()
  Picture: [2 3i 4]
  Note that the iterator still happily points to the object at the second index.
- The ! operation can be performed against two iterators to test for equivalence (or lack thereof).
- Post-increment operator: the iterator advances one index towards the end.
- Pre-increment operator: the iterator advances one index towards the end.
- Pre-decrement operator: the iterator moves one index towards the beginning.
- Post-decrement operator: the iterator moves one index towards the beginning.
- The less-than operator compares two iterators for increasing order.
- The standard assignment operator is provided for nsDequeIterator.
- The equality operator compares two iterators for equivalence.
- There is also a comparison of two iterators for non-strict decreasing order.
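The index-binding pitfall described above can be shown with plain C++ (using std::deque and a stored index, not Mozilla's nsDeque API): after a pop from the front, the same index silently names a different element.

```cpp
#include <deque>
#include <cstddef>

// Plain C++ illustration (std::deque, not Mozilla's nsDeque): an "iterator"
// that is really a stored index keeps naming whatever element currently sits
// at that index, so popping the front silently shifts its target.
int element_at_index_after_pop_front() {
    std::deque<int> d = {1, 2, 3, 4};
    std::size_t i = 1;   // bound to index 1, which currently holds the value 2
    d.pop_front();       // deque is now [2, 3, 4]
    return d[i];         // index 1 now names the value 3
}
```

This matches the picture above: [1 2i 3 4] becomes [2 3i 4] after PopFront().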
http://doxygen.db48x.net/comm-central/html/classnsDequeIterator.html
Contributing to Babel: Three Lessons to Remember

Getting to work your way around a new code base always poses its challenges, and Babel was no exception. I’ve been working with Babel as part of the Google Summer of Code 2017 program, working to update Babel transforms and the Babylon parser to accommodate changes to specifications and implementing new features. Here are a few things I’ve learnt from my adventures so far.

1. Yes, communication is important

To start off with getting to know the codebase better, I combed through the open issues list on Babel and found a relatively easy one (issue #5728) to deal with. Just to make sure I knew what I was doing, I fired a quick question on the thread:

After getting clarification, I set off to change the plugin to not throw "runtime" errors during transpilation, but only when the code is actually being run. One incriminating piece of code stuck out:

for (const violation of (binding.constantViolations: Array)) {
  throw violation.buildCodeFrameError(messages.get("readOnly", name));
}

Now what needed to be done here was to actually insert a throw statement into the generated code, which didn’t prove to be too difficult. However, there were still a few cases where runtime errors were being thrown elsewhere, from code that wasn’t directly related to this file. Wanting to go and explore other parts of the Babel code base, I put that down for me to get on with later. Not too long after, I received a, well, interesting update on the issue…

Wait, what? I never actually said I was working on fixing the issue, but assumed that posting would have implied I was going to work on it. Oops.

2. Where snapshot testing falls short

After setting off for another hunt, I stumbled across issue #5656:

Arguments deoptimized when shadowed in nested function

This is a feature request (I think).
Arguments are not optimized if an inner function shadows the name with a parameter (or rest parameters in my case).

Input code

const log = (...args) => console.log(...args);

function test_opt(...args) {
  log(...args);
}

function test_deopt(...args) {
  const fn = (...args) => log(...args);
  fn(...args);
}

...

Expected vs. Current Behavior

I’d expect the code to be optimizable to use .apply(thisArg, arguments) throughout. However, in test_deopt the outer ...args gets copied just to be passed into the inner fn. I can verify that the problem disappears if I rename either the ...args of test_deopt or of the fn arrow function.

What’s going on here?

Now what was happening was that this code would generate the following:

var log = function log() {
  var _console;
  return (_console = console).log.apply(_console, arguments);
};

function test_opt() {
  log.apply(undefined, arguments);
}

function test_deopt() {
  // unnecessary loop
  for (var _len = arguments.length, args = Array(_len), _key = 0; _key < _len; _key++) {
    args[_key] = arguments[_key];
  }
  var fn = function fn() {
    return log.apply(undefined, arguments);
  };
  fn.apply(undefined, args);
}

See that for section there? Usually this is needed, as the arguments object isn’t a real array — for example, if you tried to run arguments.slice(), it would fail miserably. However, in this case it’s only being passed to Function.prototype.apply. Surprisingly enough, Babel already bothers to optimize this specific case, as in the test_opt example above.

Trying to fix it

So what did I do? Adding the problem file as a new test case, I tried to see if I could get the output to reflect what I wanted. “Why’s the test failing? Surely if I change it a little it will solve itself.” Despite spamming make test-only and modifying the transforms of referenced identifiers within the code, any change just resulted in a different bunch of tests failing instead.
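To see why that copy loop is (usually) pure overhead, here's a standalone sketch of the two forwarding patterns, independent of Babel's actual transform: `arguments` is array-like, and `Function.prototype.apply` accepts array-likes directly.

```javascript
// Standalone illustration of the optimized vs. deoptimized forwarding
// patterns discussed above (not Babel's actual output).
function sum() {
  var total = 0;
  for (var i = 0; i < arguments.length; i++) total += arguments[i];
  return total;
}

// Optimized: `arguments` is array-like, so apply() can consume it directly.
function forwardFast() {
  return sum.apply(undefined, arguments);
}

// Deoptimized: the copy loop materializes a real array first, which is only
// necessary when the args are used as a real array (e.g. args.slice()).
function forwardSlow() {
  for (var _len = arguments.length, args = Array(_len), _key = 0; _key < _len; _key++) {
    args[_key] = arguments[_key];
  }
  return sum.apply(undefined, args);
}
```

Both produce the same result; the difference is the extra allocation and copy in the slow path.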
The Chromium debugger is “fun”

Miserable, annoyed and confused, I bothered to fire up the Node.js inspector to step through what was going on. After returning to my computer from a drink break, I was greeted by my hard disk light thrashing around and a practically hung computer. Holding my computer together with judicious applications of Alt + SysRq + F, I managed to work through the flow of things¹ and figure out how exactly the code worked. Even through all that, I still couldn’t see any reason why it was deciding to remove this “necessary” (so I thought) code that was being removed with my original fix.

The actual problem?

See the error shown above? That entire code in green wasn’t meant to be there, even though it was “expected”. Basically: the test was broken. Great. :/

The actual fix involved creating a referencesRest function to make sure that the spread operator was actually being applied to the original parameter, rather than a variable in another scope masking the variable.

¹: Turns out that adding a large folder to the DevTools workspace would leak memory until causing an OOM (bug I filed for this).

So why do we use snapshot testing then?!

Well, first off, it's far easier to create tests when all you need to do is ask Babel to run your test case to generate your expected file. This presents a low-time-cost option while protecting against a significant proportion of potential errors. Also, especially with the type of program Babel is, it would be far harder to test in other ways. For example, we could check for specific nodes of the AST, but this takes far longer to write and is also prone to non-obvious breakage when your code attempts to change the way the transform is done.

So, all in all, a few lessons here:

- Make sure your tests are right in the first place—don't be complacent!
- Yes, the debugger is actually useful in seeing what goes on.
- Sometimes things take time to work out—if you’re getting nowhere, take a break or work on something else.

3. Team meetings!

I know this kinda stretches the notion of an “issue”, but anyway :)

When you’re working on a project with a bunch of other people, it’s always useful to catch up with one another and discuss areas which we need to work on. So how exactly do we go about doing that?! Ugh, meetings. When you have a bunch of people spread across the world, finding ways to communicate is never easy, but regardless we would have to make do with our attempts at this feat.

Time zones

When you’re dealing with an open source project spanning the globe, picking an appropriate hour quickly turns into a rather involved exercise in bikeshedding. Even with the vast spread between each of us, it seemed like we could just about manage to finally get something together. Alas, this was not to last. Eventually, we ended up having to switch between two times every other week to accommodate other users (13:00 and 16:00 UTC), which meant that I was only able to attend once a fortnight.

Despite this, we’ve managed to make significant progress with coordinating fixes to various parts that make up key changes to Babel, including support for TypeScript, changes to the order in which transform plugins run, as well as keeping up to date with changes from TC39.

Where to next?

We’re continuing to polish up Babel 7 for general consumption, with a number of new features coming along with that. I’m working with a bunch of others to get support for the updated Class Fields specification proposal included in Babel so that people can test it out and provide feedback.

Also, while I’m at it, I’d like to thank all of the Babel mentors and contributors for helping me out with peer reviews and providing guidance with proposals, all the way from first contact to today.

Looking to find out more about Babel?
Hit up our contributing page and join the Slack community!

More about Karl

Karl Cheng is a GSoC 2017 student hailing from Sydney, Australia. Find out more about him on GitHub (Qantas94Heavy) and Twitter (@Qantas94Heavy)!

Please check out our first post on Summer of Code for more info!
https://babeljs.io/blog/2017/08/16/gsoc-karl-1.html
If you've been following me at all on this site, or on Twitter, you've heard me mention several times about Metron, which is a JavaScript library I've been writing, written in TypeScript. Metron exists as many things: a convenience library, a routing system, a templating engine, and a full front-end framework. Any good library, however, often stands on the shoulders of giants--or maybe StackOverflow code and open source GitHub repositories. Metron needs to be compatible with Internet Explorer 11 at the moment, so native Promises aren't available. As such, we've added RSVP as a dependency. In addition, the framework works best with the Milligram CSS library. There is also an optional dependency of Awesomplete until I get around to custom building an autocomplete control. Another place where Metron borrows from the community is with its markdown engine. A nice bit of compact markdown code is available on GitHub from Mathieu Henri. This did most of what Metron needed, so rather than reinvent the wheel, I incorporated it into the Metron library, and gave appropriate credit. A couple of modifications needed to be made. One of the most powerful aspects of TypeScript is that it is statically typed. When Henri's code is plugged into the TypeScript language service, and an attempt is made to add appropriate typing, we see some immediate issues where static typing is being less flexible than his initial JavaScript code--specifically, there are a few instances where numbers are later being reassigned to strings, and vice versa. The first thing I did was clean this up. It meant more code, but it also meant slightly more structure. The next step was to pull out a couple of instances of inline code that was compact, but a little hard to read. For example, these two functions were previously directly in a ternary operator, but I pulled them out, and renamed a bunch of variables to make things more readable and meaningful. 
function processWrappedMarkdown(prependType: Array<string>, line: string): string {
    return prependType[1] + ("\n" + line)
        .split(prependType[0])
        .slice(1)
        .map(prependType[3] ? escape : inlineEscape)
        .join(prependType[3] || "</li>\n<li>") + prependType[2];
}

function processSemanticMarkdown(char: any, line: string): string {
    return (char == "#")
        ? ("<h" + (char = line.indexOf(" ")) + ">" + inlineEscape(line.slice(char + 1)) + "</h" + char + ">")
        : (char == "<" ? line : "<p>" + inlineEscape(line) + "</p>");
}

The second function still has complexity with its own ternary operators, but imagine multiple nested ones even beyond this. Other improvements were made to the markdown engine itself: support for line breaks (converting to <br /> elements) rather than just paragraph tags, and allowing HTML inside of markdown without it being escaped, by using the function override of the replace() method.

This all worked great inside of Metron, but then to use the Metron CLI as a publishing engine, I also needed the code in there. At first I duplicated the code, which is a clear violation of quality coding, but the CLI is highly experimental. Eventually, I extracted the code from the library and created a new module specifically for the markdown engine. Although the markdown code is still present in the Metron library, the CLI now has this markdown module as a dependency, and I've published it to NPM. To install it, you can run from the command line:

npm install metronical.markdown --save

The module transpiles as a UMD module targeting ES5 (since a lot of my work still needs to function in Internet Explorer 11, as mentioned above). Since this is TypeScript, the installation comes equipped with the type definition file, so you can import it into your own TypeScript projects.
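The replace()-with-a-function-override trick mentioned above can be illustrated generically (this is not Metron's actual code; the function name and regex are assumptions): the callback receives each match and decides what to emit, which lets an escaping pass leave anything that already looks like a complete HTML tag untouched.

```typescript
// Generic illustration, not Metron's implementation: escape special
// characters except inside spans that look like complete HTML tags.
function escapeOutsideTags(input: string): string {
    // The alternation tries a full tag first, then falls back to a
    // single special character.
    return input.replace(/<[^>]*>|[&<>]/g, (match: string): string => {
        if (match.length > 1) {
            return match; // a complete tag: pass it through unescaped
        }
        switch (match) {
            case "&": return "&amp;";
            case "<": return "&lt;";
            default: return "&gt;";
        }
    });
}
```

So escapeOutsideTags("<em>5 & 3</em>") keeps the em tags intact while escaping the ampersand.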
There's still more work to do on this module, and most of it will get fleshed out as I complete the publishing engine inside of the Metron CLI, but for now, if you're in need of a lightweight markdown engine, be my guest and download this one. Let me know what you think. You can also submit a pull request too!
https://codepunk.io/spinning-out-the-markdown-engine-in-the-metron-typescript-library/
The official Docker repository doesn't provide any real signature system. You can build the container yourself (a simple clone followed by docker build . is enough). The build installs the Agent from our signed APT repo.

We recognize that Docker does not recommend using the --privileged flag in production environments. This flag was part of our work to add the missing metrics (such as network) to the Agent container. However, it doesn't bring any additional metrics for now, so you can drop it completely. We don't ask for specific kernel capabilities, so the container should run with the default set. These are the minimum capabilities required by Docker.

We ask to mount a few directories into the Agent:
- The Docker daemon socket. This is RW so that we can query it to get the list of containers and events.
- The list of mounts and cgroups, to collect Docker performance metrics. These are RO.
- If these mounts are a problem for you, you can remove them all, but the Docker integration will no longer work.

If you use some integrations, you may need to link containers with the Agent so that it can contact them (example: getting the status page from Apache). There is no restriction around that (you can namespace it, restrict it, etc.) as long as the Agent can get the HTTP page. You may also want to use DogStatsD, in which case other containers will send UDP datagrams to the Agent.
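As a hedged sketch of what such a deployment might look like (the image name, environment variable, and host paths below are illustrative assumptions, not an official command; check the repository's README for the real invocation):

```shell
# Illustrative only: image name, env var, and mount paths are assumptions.
#  - the docker.sock mount is RW so the Agent can query containers/events
#  - the /proc/mounts and cgroup mounts are RO, for performance metrics;
#    drop them (and the socket) if they worry you, at the cost of the
#    Docker integration
#  - port 8125/udp receives DogStatsD datagrams from other containers
docker run -d --name dd-agent \
  -e API_KEY="<YOUR_API_KEY>" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /proc/mounts:/host/proc/mounts:ro \
  -v /sys/fs/cgroup:/host/sys/fs/cgroup:ro \
  -p 8125:8125/udp \
  datadog/docker-dd-agent
```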
https://help.datadoghq.com/hc/en-us/articles/204267415-Datadog-security-in-Docker-production-environments
Step 1: Parts

2. Arduino; I'm using an Arduino Mega, but a standard one should have just enough pins.
3. LCD character display.
4. Some odds and ends, including some wire and a 1 MΩ resistor.
5. A computer, you know, that thing you're using to read my Instructable.
6. Patience.

Step 2: Connecting the LCD and letting your creation talk to the world

Your LCD has 16 through-hole solder pads, so the first thing to do is attach some pins. If you're patient, I recommend purchasing a header like this. But if you want to get done as fast as possible (like me), you can use wire. Simply cut 16 pieces of wire at about 1/2" (13 mm; longer is okay). Then solder them to the board.

Step 3: Connecting the LCD, continued

Pin 1: Ground
Pin 2: +5 volts
Pin 3: Contrast adjust
Pin 4: RS
Pin 5: R/W (goes to ground)
Pin 6: Enable
Pins 7-14: Data
Pin 15: Backlight power
Pin 16: Backlight ground

Step 4: Data Lines

It doesn't matter what pins you use, but I recommend following the schematic.

1. Since temperature could change the result, I was wondering: if a temperature sensor were implemented, could this keep it calibrated? I assume the liquid temperature would be most important. I was thinking of a temperature sensor attached to the water bottle at the lowest point.

2. The instructions mentioned that shielded cable would work better. Would you put the shield to the ground piece of foil and the center wire to the other foil (sense)? I did notice that the results were jumpy and that my hand would throw it off. I wonder if the shielded cable would help the results be more consistent and prevent the noise (from my hand).

3. I see in the image of the water bottle that the wire is soldered to the foil and then wrapped around the bottle a few times. It also appears you twisted the lead back. Are these important steps as well? Would they help with the noise? Would they be required if you are using shielded wire?
I also saw another writeup on this method, and it mentions using an insulator over the foil and then another layer all the way around that is a ground plane. Thanks!

2: Two-conductor shielded cable. The shield should not be used as a conductor.
3: It's not wrapped around the bottle? The cable I used at the time was a twisted pair from an Ethernet cable.
4: An insulator and an extra layer of foil would act as a shield.

A far better option is a float attached to a potentiometer. I only used this because I could not put anything inside the tank.

3. What do you mean by saying "It's not wrapped around the bottle?" Must it be wrapped around the bottle or not? Thanks!

Oh, I must get this code to work! This project is fantastic and I need it in one of my larger projects. Sadly, I am still very much a newbie, and there are apparently obsolete keywords and syntax in this sketch that are kicking my butt. Any help you could offer that would allow me to get this running on an Uno with Arduino 1.6.3 would be most appreciated. Thanks!
Here's the error log:

Arduino: 1.6.3 (Windows 8.1), Board: "Arduino Uno"
CapLiquidMeter.ino:7:21: error: 'f' was not declared in this scope
CapLiquidMeter.ino:14:2: error: 'CapSense' does not name a type
CapLiquidMeter.ino: In function 'void setup()':
CapLiquidMeter.ino:31:18: error: expected primary-expression before ')' token
CapLiquidMeter.ino:33:18: error: expected primary-expression before ')' token
CapLiquidMeter.ino:35:18: error: expected primary-expression before ')' token
CapLiquidMeter.ino:37:18: error: expected primary-expression before ')' token
CapLiquidMeter.ino: In function 'void loop()':
CapLiquidMeter.ino:51:11: error: 'cs_22_23' was not declared in this scope
CapLiquidMeter.ino:90:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:97:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:104:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:111:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:118:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:125:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:132:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:139:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:146:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:153:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:160:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:167:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:174:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:181:19: error: 'BYTE' was not declared in this scope
CapLiquidMeter.ino:188:19: error: 'BYTE' was not declared in this scope
Error compiling.

This report would have more information with "Show verbose output during compilation" enabled in File > Preferences.

I was able to compile without errors, but have not tested it.
Get the Capacitive Sensor library here and install it. Maybe you can install it using the library manager, which will download it for you.

Change the includes to:

#include <LiquidCrystal.h>
#include <CapacitiveSensor.h>

Since the library has a new name, you must change every reference to the old library into a reference to the new library.

Change
CapSense cs_22_23 = CapSense(22,23);
to
CapacitiveSensor cs_22_23 = CapacitiveSensor(22, 23);

Change
fuel = cs_22_23.capSenseRaw(200);
to
fuel = cs_22_23.capacitiveSensorRaw(200);

Change every line with lcd.print(x, BYTE); (where x can be 1, 2, 3 or 4) to lcd.write(x);

That's all! Good luck!

now this is a REAL project. I love it.

Hi! Could you please explain in detail how to make the capacitor? I didn't get the exact idea behind it.

hi, I am trying to build a fuel sensor based on the above schematic using an Arduino Uno. The sensor works fine with water, but it does not work with petrol (gasoline). I think this is because petrol has a lower dielectric constant compared to water. Presently I am winding the foil on the opposite sides of the tank. Please suggest a method to increase the sensitivity of the sensor. I cannot do anything from inside the tank, so please suggest alternatives which can be implemented externally. THANKS :)

Hi! You said this project could be done with a standard Arduino board, but I am wondering whether an Arduino Uno would serve the purpose, as it does not have send/receive pins. What do you think?

how are the resistor and the ground connected to the arduino? what is the material used in the capacitance?

Hi. First of all, congratulations on your experiment. I just did not understand what ''fuel = fuel - 7200'' and ''fuel = fuel/93'' do. Thank you!

7200 is the capacitance without liquid. It's your zero adjust. 93 is the adjustment for a full tank.

Excellent instructable. I am interested in measuring and logging the tide level in a saltwater canal. Do you think your concept would work for that?
I would use stainless steel for the probe. I had planned to use pressure differential, but I think I like the capacitance idea better.

A SS tube with a SS rod in the middle would work, but you'll need to insulate it somehow. A thick layer of polyurethane (the clear kind used with automotive paint) might do. Only the rod; the tube would be grounded. That setup would be a lot more linear than what I built, so calibration would probably be easier. You'd just need a reading with it above the water, and one all the way in.

You mentioned "The sensor works by charging the capacitor (the water bottle) and measuring how long it takes to drain through the resistor." Can you be a bit more specific on the process? Thanks. Cheers

One acts as a ground, the other as sense. It doesn't matter which way around you set it up. The sheets should go from the bottom to the top of the container. It's a bit of a pain to solder aluminum foil, but it can be done; otherwise, anything conductive.

I have pet cockatiels that love to backwash in their water dish. It doesn't take long for the water to get kinda nasty. I wonder if something like this could be used to alert me when their water needs a change-out. Have you ever noticed a significant reaction to the electrolyte getting contaminated? Any tips you can offer would be much appreciated. Thanks for the inspiration.

Using an infrared LED on one side and a receiver on the other could be used to detect when impurities were in the water.

I considered the infrared idea, but they like to bathe in the water too, so that might trip it. Plus, they like to break things, so I was hoping some submerged concentric metal tubes would be beak-proof. I guess it's a trivial problem but a fun challenge. Thanks for the reply.

I haven't noticed (or have overlooked) any max practical size for your setup, but I assume it might not work out for my requirements, right? Could anyone point me in the right direction?
Maybe sink a pressure sensor to the bottom of the tank and read out the static pressure, using an Arduino?

Your best bet would be to use a couple of metal pipes, one inside the other, making sure they don't touch. Then use this setup to measure the capacitance between them. Another option is a simple float inside a tube, with a small magnet attached to it. Then put some reed switches along the length of the pipe.

1: So the two aluminum sheets sit outside the container? One on either side of it?
2: Would there not be a risk of the capacitor discharging and creating a spark?

Comment: NO METAL CONTAINERS!? I guess a keg level sensor is out of the picture lol.
http://www.instructables.com/id/Building-a-Capacitive-Liquid-Sensor/CWT15DTG2WM21NG
For a deeper look into our Eikon Data API, look into: Overview | Quickstart | Documentation | Downloads | Tutorials | Articles

Hi, using TR.TotalReturn to get the monthly time series of returns for an index doesn't work, e.g.

df, e = ek.get_data(['.FTLC'], ['TR.TotalReturn.date', 'TR.TotalReturn'], parameters={'SDate':'0M', 'EDate':'-59M', 'Frq':'M', 'CH':'Fd'})

Can you please confirm what I should be using?

Total return for indices is not available through Eikon Data APIs. However, you can get it using Datastream Web Service.

import DatastreamDSWS as dsws
ds = dsws.Datastream(username='XXXXXXX', password='XXXXXXX')
df = ds.get_data(tickers='<.FTLC>', fields=['RI'], start='-5Y', end='-0M', freq='M')
df['Total Return'] = df.pct_change()
df

If you don't have credentials for Datastream Web Service (a Datastream Child ID and password), contact your Refinitiv account team.

@mohammed_sabur, I think you need to clarify what you mean by "total return" first. The RIC ".FTLC" itself is a capital return (CR) index, as the index does not include the distributions from the underlying assets (constituents). There is a total return (TR) version of the same index, and its RIC is ".TRINMX". You can check that the return from the TR version is always higher than or equal to that of the CR version, because the distributions of the underlying assets are reinvested in the TR version.

Since an index itself never pays dividends, there is only one "return" you can derive from the index value itself. The return of an index is simply:

(Current index value / Previous index value) - 1

You should be able to calculate that from the index time series directly. Typically the historical value of an index can be obtained by the TR.CLOSEPRICE field. You should set the "Adjusted" flag to 1 so the historical data is adjusted in case there is a split or merge event (re-base of the index), but that is very rare for indices.

Thanks. Both answers help.
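The return formula quoted above is easy to check in plain Python, with no Eikon or Datastream access. The index levels below are made up for illustration, and the helper name is mine, not part of either API; the result matches what pandas' pct_change() would give on the same series:

```python
def simple_returns(levels):
    """Period-over-period returns: (current / previous) - 1 for each step."""
    return [curr / prev - 1 for prev, curr in zip(levels, levels[1:])]

closes = [100.0, 110.0, 99.0]          # illustrative monthly index closes
returns = simple_returns(closes)
print([round(r, 4) for r in returns])  # -> [0.1, -0.1]
```

Note that the first period has no previous value, so a series of N levels yields N-1 returns (pandas represents that first slot as NaN instead of dropping it).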
https://community.developers.refinitiv.com/questions/45549/time-series-total-return-for-index-benchmark-eg-ft.html
25 February 2009 15:37 [Source: ICIS news]

By Katrina Chen

SHANGHAI (ICIS news)--Losing sight of long-term opportunities is very easy in the current global economic turmoil. One major opportunity has to be the growth potential for polyvinyl chloride (PVC) in China.

And, even over the next few months, the prospects are far from all doom and gloom. The yuan (CNY) 4,000bn ($585bn) government stimulus package promises to provide a much-needed boost to struggling end-use industries such as real estate. But the question that always has to be raised these days over

Growth in local capacity might be slowing down on a temporary decline in consumption and weaker acetylene-based economics. But the country is already a significant exporter in

If quality and other concerns can be addressed, PVC pipes only account for 50% of the total market in

As a result, CBI is forecasting 10% demand growth until 2010, buoyed by the priorities of

The profiles sector is also at a relatively early stage of development. Compulsory regulations are expected to be put in place, boosting profile use in steel-reinforced plastic windows and doors, the report said.

“These regulations will include tightening the minimum threshold for small companies, reinforcing quality supervision – and tougher measures against unqualified enterprises,” says CBI.

Profiles are forecast to grow by more than 9% per year until 2010, with demand for film and wire and cable applications expected to grow by 7% and 4% respectively. Artificial leather will also grow by 4%.

However, overall apparent consumption slipped by 10% to 8.8m tonnes in 2008 compared with the previous year, CBI said.

The economic downturn has not only damaged consumption. As oil prices fell, acetylene-based producers – which provide the majority of

Acetylene players were also hit by the rising cost of coal and carbide – their raw materials.
“Some of the acetylene producers are looking to convert to ethylene-based technology and are searching for procurement sources,” said the CBI report.

Lower growth and the shift in economics have led to the postponement of numerous projects. Capacity grew by 25% per year between 2002 and 2006. But output is expected to grow by only 4.4% per year in 2008-2012, estimates CBI.

The industry also confronts long-term structural problems that have already led to consolidation.

“Sixty-two per cent of capacity is in the north, mid and west of China.

“This means that significant volumes of PVC are transported from province to province, which can be an issue given poor inland infrastructure.

“Problematic logistics have helped foster the development of small, inland chlor-alkali-vinyls operations serving limited local markets.”

The government, though, is encouraging new, bigger plants, and industry consolidation has improved economies of scale. Consolidation has involved either expansions or mergers, says CBI.

But although LG Dagu and Shanghai Chlor-Alkali Chemical Corp each have the capability to produce more than 400,000 tonnes/year, most of the producers in the north and west are running plants that produce less than 50,000 tonnes/year. What keeps these smaller operators competitive are low labour costs and plentiful raw materials.

The industry also faces overseas end-user concerns – real or otherwise – over specifications and quality, long-term supply security and lack of production in some value-added grades. Additional problems are difficulties in mixing acetylene with ethylene-derived PVC; small-scale packaging, which is usually used in

This means that imports exceed exports even though

Imports were at 800,000 tonnes in 2008, are forecast to rise to 1.4m tonnes this year, will decline to around 1.2m tonnes in 2010 and will be at roughly 1m tonnes in 2011 and 2012, says CBI. Exports were 600,000 tonnes in 2008 and are expected to fall below this level in 2009-2012.
“Outside Asia, Chinese product is only sold in significant quantities in

Exports surged from September 2005 up until mid-2006. However, CBI says that trade barriers have since resulted in a stagnation in volumes.

Still, any other PVC industry in any other country in the world would love to be set to enjoy demand growth of more than 9% over the next two years. Economic development and government policies look certain to ensure that

($1 = CNY6.84)

By Katrina Chen.
http://www.icis.com/Articles/2009/02/25/9195523/insight-looking-to-healthy-growth-for-china-pvc.html
Hello Paul,

On Friday 12 December 2008 03:48, Paul Mackerras wrote:
> > The assumption is that THREAD_SIZE is a power of 2, as is PAGE_SIZE.
>
> I think Yuri should be increasing THREAD_SIZE for the larger page
> sizes he's implementing, because we have on-stack arrays whose size
> depends on the page size. I suspect that having THREAD_SIZE less than
> 1/8 of PAGE_SIZE risks stack overflows, and the better fix is for Yuri
> to make sure THREAD_SIZE is at least 1/8 of PAGE_SIZE. (In fact, more
> may be needed - someone should work out what fraction is actually
> needed.)

Right, thanks for pointing this out. I guess I was just lucky, since I didn't run into problems with stack overflows. So, I agree that we should increase THREAD_SIZE in the case of 256KB pages up to 1/8 of PAGE_SIZE, that is, up to 32KB.

There is one more warning from the common code when I use 256KB pages:

  CC      mm/shmem.o
mm/shmem.c: In function 'shmem_truncate_range':
mm/shmem.c:613: warning: division by zero
mm/shmem.c:619: warning: division by zero
mm/shmem.c:644: warning: division by zero
mm/shmem.c: In function 'shmem_unuse_inode':
mm/shmem.c:873: warning: division by zero

The problem here is that ENTRIES_PER_PAGEPAGE becomes 0x1.0000.0000 when PAGE_SIZE is 256K.
How about the following fix?

diff --git a/mm/shmem.c b/mm/shmem.c
index 0ed0752..99d7c91 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -57,7 +57,7 @@
 #include <asm/pgtable.h>

 #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
-#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
+#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
 #define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512)
 #define SHMEM_MAX_INDEX (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
@@ -95,7 +95,7 @@ static unsigned long shmem_default_max_inodes(void)
 }
 #endif

-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 			 struct page **pagep, enum sgp_type sgp, int *type);

 static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
@@ -533,7 +533,7 @@ static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)
 	int punch_hole;
 	spinlock_t *needs_lock;
 	spinlock_t *punch_lock;
-	unsigned long upper_limit;
+	unsigned long long upper_limit;

 	inode->i_ctime = inode->i_mtime = CURRENT_TIME;
 	idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
@@ -1175,7 +1175,7 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache
  */
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 	struct page **pagep, enum sgp_type sgp, int *type)
 {
 	struct address_space *mapping = inode->i_mapping;

Regards,
Yuri
http://lkml.org/lkml/2008/12/18/46
/* System description file for Windows NT. */

/*
 * Define symbols to identify the version of Unix this is.
 * Define all the symbols that apply correctly.
 */

/* #define UNIPLUS */
/* #define USG5 */
/* #define USG */
/* #define HPUX */
/* #define UMAX */
/* #define BSD4_1 */
/* #define BSD4_2 */
/* #define BSD4_3 */
/* #define BSD */
/* #define VMS */
#ifndef WINDOWSNT
#define WINDOWSNT
#endif
#ifndef DOS_NT
#define DOS_NT   /* MSDOS or WINDOWSNT */
#endif

/* If you are compiling with a non-C calling convention but need to
   declare vararg routines differently, put it here */
#define _VARARGS_ __cdecl

/* If you are providing a function to something that will call the
   function back (like a signal handler and signal, or main) its
   calling convention must be whatever standard the libraries expect */
#define _CALLBACK_ __cdecl

/* SYSTEM_TYPE should indicate the kind of system you are using.
   It sets the Lisp variable system-type.  */
#define SYSTEM_TYPE "windows-nt"

#define SYMS_SYSTEM syms_of_ntterm ()

#define NO_MATHERR
#define HAVE_FREXP
#define HAVE_FMOD

/* NOMULTIPLEJOBS should be defined if your system's shell does not
   have "job control" (the ability to stop a program, run some other
   program, then continue the first one).  */
/* #define NOMULTIPLEJOBS */

/* Emacs can read input using SIGIO and buffering characters itself,
   or using CBREAK mode and making C-g cause SIGINT.
   The choice is controlled by the variable interrupt_input.
   Define INTERRUPT_INPUT to make interrupt_input = 1 the default (use SIGIO)

   Emacs uses the presence or absence of the SIGIO macro to indicate
   whether or not signal-driven I/O is possible.  It uses
   INTERRUPT_INPUT to decide whether to use it by default.

   SIGIO can be used only on systems that implement it (4.2 and 4.3).
   CBREAK mode has two disadvantages
     1) At least in 4.2, it is impossible to handle the Meta key properly.
        I hear that in system V this problem does not exist.
     2) Control-G causes output to be discarded.
        I do not know whether this can be fixed in system V.

   Another method of doing input is planned but not implemented.  It
   would have Emacs fork off a separate process to read the input and
   send it to the true Emacs process through a pipe.  */
#define INTERRUPT_INPUT

/* Letter to use in finding device name of first pty,
   if system supports pty's.  'a' means it is /dev/ptya0  */
#define FIRST_PTY_LETTER 'a'

/*
 * Define HAVE_TERMIOS if the system provides POSIX-style
 * functions and macros for terminal control.
 *
 * Define HAVE_TERMIO if the system provides sysV-style ioctls
 * for terminal control.
 *
 * Do not define both.  HAVE_TERMIOS is preferred, if it is
 * supported on your system.
 */
/* #define HAVE_TERMIOS */
/* #define HAVE_TERMIO */

/*
 * Define HAVE_TIMEVAL if the system supports the BSD style clock values.
 * Look in <sys/time.h> for a timeval structure.
 */
#define HAVE_TIMEVAL
struct timeval
  {
    long tv_sec;        /* seconds */
    long tv_usec;       /* microseconds */
  };
struct timezone
  {
    int tz_minuteswest; /* minutes west of Greenwich */
    int tz_dsttime;     /* type of dst correction */
  };

void gettimeofday (struct timeval *, struct timezone *);

/*
 * Define HAVE_SELECT if the system supports the `select' system call.
 */
/* #define HAVE_SELECT */

/*
 * Define HAVE_PTYS if the system supports pty devices.
 */
/* #define HAVE_PTYS */

/*
 * Define NONSYSTEM_DIR_LIBRARY to make Emacs emulate
 * The 4.2 opendir, etc., library functions.
 */
/* #define NONSYSTEM_DIR_LIBRARY */

/* Define this symbol if your system has the functions bcopy, etc.  */
#define BSTRING
#define bzero(b, l) memset(b, 0, l)
#define bcopy(s, d, l) memcpy(d, s, l)
#define bcmp(a, b, l) memcmp(a, b, l)

/* */

/* Define this if your operating system declares signal handlers to
   have a type other than the usual.  `The usual' is `void' for ANSI C
   systems (i.e. when the __STDC__ macro is defined), and `int' for
   pre-ANSI systems.
   If you're using GCC on an older system, __STDC__ will be defined,
   but the system's include files will still say that signal returns
   int or whatever; in situations like that, define this to be what
   the system's include files want.  */
/* #define SIGTYPE int */

/* If the character used to separate elements of the executable path
   is not ':', #define this to be the appropriate character constant.  */
#define SEPCHAR ';'

/* ============================================================ */

/* Here, add any special hacks needed
   to make Emacs work on this system.  For example,
   you might define certain system call names that don't
   exist on your system, or that do different things on
   your system and must be used only through an encapsulation
   (Which you should place, by convention, in sysdep.c).  */

/* Define this to be the separator between path elements */
#define DIRECTORY_SEP '\\'

/* Define this to be the separator between devices and paths */
#define DEVICE_SEP ':'

/* We'll support either convention on NT.  */
#define IS_DIRECTORY_SEP(_c_) ((_c_) == '/' || (_c_) == '\\')
#define IS_ANY_SEP(_c_) (IS_DIRECTORY_SEP (_c_) || IS_DEVICE_SEP (_c_))

/* The null device on Windows NT. */
#define NULL_DEVICE "NUL:"

#define EXEC_SUFFIXES ".exe:.com:.bat:"

#ifndef MAXPATHLEN
#define MAXPATHLEN _MAX_PATH
#endif

#define LISP_FLOAT_TYPE

#define HAVE_DUP2 1
#define HAVE_RENAME 1
#define HAVE_RMDIR 1
#define HAVE_MKDIR 1
#define HAVE_GETHOSTNAME 1
#define HAVE_RANDOM 1
#define USE_UTIME 1
#define HAVE_MOUSE 1
#define HAVE_TZNAME 1

#ifdef HAVE_NTGUI
#define HAVE_WINDOW_SYSTEM
#define HAVE_FACES
#endif

#define MODE_LINE_BINARY_TEXT(_b_) (NILP ((_b_)->buffer_file_type) ? "T" : "B")

/* These have to be defined because our compilers treat __STDC__ as
   being defined (most of them anyway). */
#define access    _access
#define chdir     _chdir
#define chmod     _chmod
#define close     _close
#define creat     _creat
#define dup       _dup
#define dup2      _dup2
#define execlp    _execlp
#define execvp    _execvp
#define getpid    _getpid
#define index     strchr
#define isatty    _isatty
#define link      _link
#define lseek     _lseek
#define mkdir     _mkdir
#define mktemp    _mktemp
#define open      _open
#define pipe      _pipe
#define read      _read
#define rmdir     _rmdir
#define sleep     nt_sleep
#define unlink    _unlink
#define umask     _umask
#define utime     _utime
#define write     _write
#define _longjmp  longjmp
#define spawnve   win32_spawnve
#define wait      win32_wait
#define signal    win32_signal
#define rindex    strrchr
#define ctime     nt_ctime  /* Place a wrapper around ctime (see nt.c). */
#ifdef HAVE_NTGUI
#define abort     win32_abort
#endif

/* Defines that we need that aren't in the standard signal.h  */
#define SIGHUP  1               /* Hang up */
#define SIGQUIT 3               /* Quit process */
#define SIGTRAP 5               /* Trace trap */
#define SIGKILL 9               /* Die, die die */
#define SIGPIPE 13              /* Write on pipe with no readers */
#define SIGALRM 14              /* Alarm */
#define SIGCHLD 18              /* Death of child */

/* For integration with MSDOS support.  */
#define getdisk()               (_getdrive () - 1)
#define getdefdir(_drv, _buf)   _getdcwd (_drv, _buf, MAXPATHLEN)

#define EMACS_CONFIGURATION     get_emacs_configuration ()
#define EMACS_CONFIG_OPTIONS    "NT"   /* Not very meaningful yet.  */

/* Define this so that winsock.h definitions don't get included when
   windows.h is...  I don't know if they do the right thing for emacs.
   For this to have proper effect, config.h must always be included
   before windows.h.  */
#define _WINSOCKAPI_    1

/* Defines size_t and alloca ().  */
#include <malloc.h>

/* We have to handle stat specially.  However, #defining stat to
   something else not only redefines uses of the function, but also
   redefines uses of the type struct stat.  What unfortunate parallel
   naming.
*/
#include <sys/stat.h>

struct nt_stat
{
  struct _stat statbuf;
};

#ifdef stat
#undef stat
#endif
#define stat nt_stat
#define st_dev statbuf.st_dev
#define st_ino statbuf.st_ino
#define st_mode statbuf.st_mode
#define st_nlink statbuf.st_nlink
#define st_uid statbuf.st_uid
#define st_gid statbuf.st_gid
#define st_rdev statbuf.st_rdev
#define st_size statbuf.st_size
#define st_atime statbuf.st_atime
#define st_mtime statbuf.st_mtime
#define st_ctime statbuf.st_ctime

/* Define for those source files that do not include enough NT
   system files.  */
#ifndef NULL
#ifdef __cplusplus
#define NULL 0
#else
#define NULL ((void *)0)
#endif
#endif

/* For proper declaration of environ.  */
#include <stdlib.h>

/* Emacs takes care of ensuring that these are defined.  */
#ifdef max
#undef max
#undef min
#endif

/* ============================================================ */
https://emba.gnu.org/emacs/emacs/-/blame/97aab3a23baa2f8605bf006164173679e69d5802/src/s/ms-w32.h
2 Old Functions Like apply from scheme/base, but without support for keyword arguments. Like prop:procedure from scheme/base, but even if the property's value for a structure type is a procedure that accepts keyword arguments, then instances of the structure type still do not accept keyword arguments. (In contrast, if the property's value is an integer for a field index, then a keyword-accepting procedure in the field for an instance causes the instance to accept keyword arguments.) Like open-input-file, etc. from scheme/base, but the mode, exists, and module-mode (corresponds to #:for-module?) arguments are not keyword arguments. When both mode and exists or module-mode are accepted, they are accepted in either order. Changed in version 6.0.1.6 of package compatibility-lib: Added the module-mode argument to open-input-file. The same as syntax->datum and datum->syntax. The module-identifier=?, etc. functions are the same as free-identifier=?, etc. in scheme/base. The free-identifier=? procedure returns Creates a namespace with mzscheme attached. If mode is 'empty, the namespace's top-level environment is left empty. If mode is 'initial, then the namespace's top-level environment is initialized with (namespace-require/copy 'mzscheme). See also make-base-empty-namespace. Equivalent to (namespace-require `(for-syntax ,req)). Returns #t if v is a hash table created by make-hash-table or make-immutable-hash-table with the given flags (or more), #f otherwise. If flag2 is provided, it must be distinct from flag, and 'equal cannot be used with 'eqv; otherwise the exn:fail:contract exception is raised. Creates and returns a new hash table. If provided, each flag must be one of the following: 'weak — creates a hash table with weakly-held keys via make-weak-hash, make-weak-hasheq, or make-weak-hasheqv. 'equal — creates a hash table that compares keys using equal? instead of eq? using make-hash or make-weak-hash. 'eqv — creates a hash table that compares keys using eqv?
instead of eq? using make-hasheqv or make-weak-hasheqv. By default, key comparisons use eq? (i.e., the hash table is created with make-hasheq). If flag2 is redundant or 'equal is provided with 'eqv, the exn:fail:contract exception is raised. Like make-immutable-hash, make-immutable-hasheq, or make-immutable-hasheqv, depending on whether an 'equal or 'eqv flag is provided. The same as hash-ref, hash-set!, hash-remove!, hash-count, hash-copy, hash-map, hash-for-each, hash-iterate-first, hash-iterate-next, hash-iterate-value, and hash-iterate-key, respectively. The same as cleanse-path. Like collection-file-path and collection-path, but without the #:fail option.
https://docs.racket-lang.org/mzscheme/Old_Functions.html
Traditional RC patrolling is different from Special RC patrol in that you are patrolling each edit, one by one. The Special RC patrol tool allows you to review several edits made by different contributors on a single page. This article will show you how to patrol recent changes, the "old-fashioned" way.

Steps

1. Click on Recent Changes under "Explore" in the green menu bar.
- Optional: If you use Mozilla Firefox, you can download the Editor's Toolbar. To access RC, click on the sun, clouds or number to the right of them.
2. Set up your Recent Changes list for your patrolling preferences. You'll probably want to select "Hide patrolled edits" and you may want to adjust the number of days and edits you want to see in the list. After picking these, the page will reload.
- If desired, you can also limit the namespace that is shown, or select "Reverse order" to patrol from oldest unpatrolled edits to the newest. If you pick any of these options, click "Go" to reload the page before beginning.
3. Look for exclamation points - ! - indicating unpatrolled edits. Click on "diff" to see a particular edit and start your patrolling. You can start at the top of the list or pick any edit you want to see further down the list.
- For newly created articles and pages, marked by an N, the "diff" part of the link won't be clickable (since there are no changes to see). Just click right on the title of the article or page to view and patrol it.
4. Review the diff or page. On diff edits, the new revision of the page will be on the right-hand side, and the previous version will be on the left-hand side. Changes will be highlighted, so that you can see whether they made the page better.
- Click on Mark as patrolled if the change that you see obviously improves the page (corrects spelling or grammar, adds a category, adds new information).
- Click on the full edit link to make corrections or changes to the article/edit, before marking it patrolled.
For example, if someone adds a few sentences, but there are some spelling errors in them, use edit to make corrections. Since this will load a new editing page, you probably want to right-click and select "Open in a new tab/window" to make these changes, so you don't lose your spot in patrolling.
- Click on Skip if you're uncertain in any way. If the information added is unfamiliar to you, or if someone adds a link and you're not sure it's consistent with our external links policy, or if you're simply not sure if that edit is good or bad, please skip that edit for now.
- Click on Rollback if the edit is obviously bad. If you notice that the old revision looks like it suffered from some bad edits already, try to do a manual revert using view history, or skip the edit. After you rollback, you will still need to click "Mark as Patrolled" as well, to get the edit out of the Recent Changes queue and move to the next change.
- Some edits may not show a rollback link, because there have been other changes to the page since they were made. In this case, it helps to explore the article history (in a new tab) to make sure the current version of the page doesn't need any editing before you mark the change patrolled. You can always edit the page in order to remove any bad information that was added, or do a manual revert using view history.
- For new articles and pages, the "Mark as patrolled" and "Skip" options are at the bottom of the page. On these edits, a rollback isn't possible (because there's no previous version to rollback to), but you can open the article separately to apply templates to it as needed, particularly if a new article might need copyediting ({{copyedit}}), formatting ({{format}}), or merits a nomination for deletion. After adding any templates needed, you can mark the article patrolled (or, of course, skip it if you're not sure).
5. Reach out to the editor (optional but preferable).
   Before taking action, click on "quick note" and leave a note for the person who made the edit. You can click on any of the buttons at the top, or write a personalized message. Keep it as friendly as possible, even if the edit they made was unhelpful. Assume good faith.
6. Keep going! After you skip or mark an edit patrolled, you'll automatically get a new recent change to patrol. Repeat the review process and keep going on through the queue.

Shortcuts

- Update Scanner monitors updates. Recommended settings:
  - Scan every six hours.
  - Ignore changes less than 50 characters.

Tips

- Do not revert or edit articles with an In Use ({{inuse}}) tag unless you're reverting or editing out vandalism.
- Develop your own method.
  - Begin slowly, patrol safely, then pick up speed.
  - Make easy fixes and add templates, or spend more time on each article.
- Recent Changes is inaccessible to anonymous users. If you're anonymous, please create an account if you would like to become a patroller.
- You can also use the tags feature to see/patrol particular kinds of edits. Just be on the lookout for false positives, since the filters that tag these edits aren't foolproof. The options include:
  - Edits by a new contributor on an FA
  - First contributions from new users
  - Possible blankings by new contributors
  - Possible removals of NFD tags by non-boosters/non-admins
  - Possible additions of links without content (depending on the context/editor, this may be a violation of the External Links Policy, but it should be handled with care; not every tagged edit is a violation)

Help

- Category:Patrolling - All-inclusive link
- Patroller Notice Board - To communicate with other patrollers.
- Administrator Notice Board - Report RC glitches in the Miscellaneous section.
Sound creates an ambiance in a game that makes for a more immersive experience. Imagine how dull a game would be without sound effects; nothing would indicate when you fire your cannon or an explosion occurs. Sounds can also be used to increase the drama of a scene by increasing the tempo as the action increases. Sound effects also provide the same audible cues we expect in real life, such as the direction and speed of a person approaching us based on the volume, direction, and frequency of the footsteps. These sound effects add realism to the game just like the proper physical behavior of objects does (something I will cover in an upcoming article about physics). Games also provide background music to make playing the game more fun. In newer games, artists who have their music included in the soundtrack for a popular game see an upswing in sales; it is not unusual for well-known artists to provide music for a game soundtrack. In effect, games have reached the same level as movies with regard to the importance of soundtracks. In BattleTank2005 I want to integrate sound in the following way. First, I want standard sound effects for shooting, explosions, and engine noise, and I want that sound to be directionally accurate. What I mean is that when I am getting shot at by an enemy to my left, I want the sound to come from the left and the volume to be an indicator of the proximity of the unit. Second, I want to be able to play background music during game play, and I want to control what music plays when in the game. This means I want to have one song play during the splash screen and game setup, another while playing the game, and yet another during the HiScore capturing (we are going to add these screens and states in a later article). I am going to cover the first requirement in this article, and then cover sound effects and playing MP3 and WMA files with the AudioVideoPlayback namespace in the next article.
Before we add these features, let's review the capabilities of DirectSound. The DirectSound namespace only supports playing 2-channel waveform audio data at fixed sampling rates (PCM). While I have no idea what that really means, it is safe to state that you should use DirectSound to play short WAV files and the AudioVideoPlayback namespace for longer MP3 or WMA files. I am not going to cover how to use the sound capturing/recording capabilities of the DirectSound namespace, but remember that they exist so you know where to look if your game requires recording sounds as well as playing them. The DirectSound namespace provides the ability to play and capture sound with three-dimensional positioning effects. DirectSound also provides the ability to add sound effects to the audio played or recorded. Just like in the Direct3D and DirectInput namespaces, the actual hardware device used is abstracted into a device class. Just like the device classes in those two namespaces, the DirectSound device uses buffers, has a cooperative level, and has device capabilities. I am not going to cover the sound effects in this article, but they are not forgotten; they will be covered in detail in the next article. A device is the interface to the audio hardware on the computer. You can either create a Device class using a default GUID (DSoundHelper.DefaultPlaybackDevice) or enumerate all the devices on a system. Like the other device classes, each enumerated device has a list of capabilities stored in a Caps structure in the Caps property for the device. Once you have chosen your device, you instantiate the device class using a specific GUID. Audio devices also have a cooperative level like the input devices did. The three possible values are: Normal, Priority, and Write Primary. These values are set via the CooperativeLevel enumeration. All the sounds in DirectX are controlled via buffers. These buffers can exist in the memory of the computer or on the sound card itself.
The two buffers used in DirectSound are called the primary buffer and secondary buffer. The primary buffer contains the actual audio data that is sent to the device and is automatically created and managed by the DirectX API. The API mixes the sound in the primary buffer with any secondary buffers. If you need to interact directly with the primary buffer, make sure to change the Cooperative level of the device to Write Primary. Secondary buffers hold a single audio stream and must be explicitly created by the application. Each application must create at least one secondary buffer to store and play sounds. Each secondary buffer also has a specific waveform format (described in the WaveFormat structure), and only sound data that matches that format can be loaded into that secondary buffer. An application can play sounds of differing formats by creating a separate secondary buffer for each format and letting the API mix them into a common format in the primary buffer. To mix sounds in two different secondary buffers, simply play them at the same time and let the API mix them in the primary buffer. The only limitation to the number of different secondary buffers that can be mixed is the processing power of the system, but remember that any additional processing required will also slow down your game. We have not added any AI or physics computations, but we should be careful with the available processing power. Any secondary buffer can be used for the life of the application, or it can be created and destroyed as needed. A single secondary buffer can contain the same data throughout the entire game or it can be loaded with different sounds (as long as they match the format). The sound in the secondary buffer can be played once or set up to loop. If the sound to be played is short, it can be loaded into the buffer in its entirety (called a static buffer), but longer sounds must be streamed. 
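To make "mixing" concrete: combining two secondary buffers into the primary buffer amounts to adding the streams sample by sample and clipping the result to the legal range. The following is just an illustration of that idea in Python, not the DirectSound API; real mixers also attenuate, resample, and dither, but per-sample addition is the core of it.

```python
def mix(streams, sample_min=-32768, sample_max=32767):
    """Mix several 16-bit PCM sample streams by summing them per sample
    and clipping to the valid range -- conceptually what the primary
    buffer does with its secondary buffers."""
    length = max(len(s) for s in streams)
    mixed = []
    for i in range(length):
        total = sum(s[i] for s in streams if i < len(s))
        mixed.append(max(sample_min, min(sample_max, total)))
    return mixed

# Two "secondary buffers" playing at the same time:
engine = [1000, 2000, 3000]
cannon = [32000, 32000]        # loud, short burst
print(mix([engine, cannon]))   # [32767, 32767, 3000] -- the sum clips at the 16-bit maximum
```

This also shows why mixing many loud buffers at once degrades quality: the summed samples clip, which is one more reason to keep the number of simultaneous sounds reasonable.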
It is the responsibility of the application to manage the streaming of the sound to the buffer. When a buffer is created you have to specify the control options for that buffer using the BufferDescription class. If you use a property of the buffer without first setting it in the control properties, an exception is thrown. The control options can be combined either by setting each property to true or by combining them in the Flags property.

BufferDescription bufferDescription = new BufferDescription();
// Use the separate properties
bufferDescription.ControlVolume = true;
bufferDescription.ControlPan = true;
// or combine them in the Flags property
bufferDescription.Flags = BufferDescriptionFlags.ControlVolume | BufferDescriptionFlags.ControlPan;

To control these settings of the buffer you must first set the ControlPan, ControlVolume, and ControlFrequency properties of the buffer to true. You can then set the pan, volume, and frequency values using the buffer's Pan, Volume, and Frequency properties. Volume is expressed in hundredths of a decibel and ranges from 0 (full volume) to -10,000 (completely silent). The decibel scale is not linear, so you may reach effective silence well before the volume setting reaches true silence at -10,000. There is also no way to increase the volume of a sound above the volume it was recorded at, so you have to make sure to record the sound loudly enough to at least match the desired maximum volume in the game. Pan is expressed as an integer and ranges from -10,000 (full left) to +10,000 (full right), with 0 being center. The frequency value is expressed in samples per second and represents the playback speed of the buffer. A larger number plays the sound faster and raises the pitch, while a smaller number slows it down and lowers the pitch. To reset a sound to its original frequency, simply set the frequency value to 0. The minimum value for frequency is 100 and the maximum value is 200,000.
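Since the volume scale is hundredths of a decibel of attenuation, mapping a familiar 0.0-1.0 linear gain onto it takes a logarithm. Here is a sketch of the arithmetic in Python (the endpoints 0 and -10,000 come from the text above; the 20*log10 amplitude-to-decibel rule is standard; the helper name is my own):

```python
import math

SILENCE = -10000   # DirectSound's floor: -100 dB, in hundredths of a decibel
FULL = 0

def linear_to_dsound_volume(gain):
    """Map a 0.0-1.0 linear gain onto the hundredths-of-a-decibel
    attenuation scale (0 = full volume, -10000 = silence)."""
    if gain <= 0.0:
        return SILENCE
    # 20 * log10(gain) gives decibels of attenuation; scale to hundredths and clamp
    return max(SILENCE, min(FULL, int(round(2000 * math.log10(gain)))))

print(linear_to_dsound_volume(1.0))  # 0      -- full volume
print(linear_to_dsound_volume(0.5))  # -602   -- half amplitude is about -6 dB
print(linear_to_dsound_volume(0.0))  # -10000 -- silence
```

Note how quickly the numbers fall: half amplitude is only about -600 on a scale that runs to -10,000, which is exactly why effective silence is reached well before the floor.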
At this point you may think that we could use the volume, pan, and frequency settings to manipulate the sound and make it reflect the direction and distance of its origin. This was, after all, one of our original requirements. But instead of us performing the calculations to determine the relative locations and distances of each object, DirectX provides an API to do just that for us. The 3D features of DirectX allow us to locate sounds in space and apply Doppler shift to moving sounds. Note: Check out this excellent explanation of the Doppler effect at Wikipedia, or use this applet to help you visualize it. Before describing how DirectX handles sound in three dimensions, it is probably useful to talk about how we perceive sound. DirectX uses these same principles to make the sound appear as realistic as possible. We already know that DirectX uses the left-handed Cartesian coordinate system and vectors to express position and directional information. This same system is also used by DirectSound in its computations. One important piece of information when dealing with 3D sound is that the default unit of measurement for distance is the meter and the default measurement for velocity is meters per second. You need to make sure to use a common system of measurement in the game. You can change this by setting the DistanceFactor property of the Listener3D object to a value that represents the meters per application-specified distance unit. If you have been using feet in all your calculations up to this point, simply set this value to 0.3048 (there are 0.3048 meters in a foot). Also, since we are leaving the manipulation of the sound to DirectX (as opposed to us changing the volume, pan, and frequency), we must ensure that the sound source we are using is mono, not stereo. Finally, make sure to set the Control3D property of the buffer to true to enable 3D sounds.
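A quick sanity check of the unit bookkeeping, assuming a game that measures everything in feet (the 0.3048 figure comes from the text above; the helper names are mine):

```python
METERS_PER_FOOT = 0.3048

def feet_to_meters(feet):
    """Distances: game units (feet) -> DirectSound's default meters."""
    return feet * METERS_PER_FOOT

def feet_per_second_to_meters_per_second(v):
    """Velocities need exactly the same conversion as positions."""
    return v * METERS_PER_FOOT

# A tank 100 ft away, closing at 30 ft/s:
print(round(feet_to_meters(100), 3))                       # 30.48
print(round(feet_per_second_to_meters_per_second(30), 3))  # 9.144
```

Setting DistanceFactor once is what lets you keep all the game math in feet while DirectSound hears distances and velocities in meters; forget it, and a 100 ft gap is treated as a 100 m gap.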
DirectSound uses two objects to manage 3D sounds in the application: Buffer3D and Listener3D. Unlike the SecondaryBuffer, a Buffer3D object does not inherit from the Buffer class. Instead you create a 3D Buffer object by passing it a SecondaryBuffer in the constructor. The 3D Buffer exposes a number of properties that determine how the sound is processed. The MinDistance property determines at which distance the sound volume is no longer increased. You can also use this setting to make certain sounds appear louder even if they were recorded at the same volume (see the DirectX documentation for a detailed explanation of this). The default value for this property is 1 meter, meaning that the sound is at full volume when the distance between the listener and the sound source equals 1 meter. The MaxDistance property is the opposite and determines the distance after which the sound no longer decreases in volume. The default value for this property is 1 billion meters, which is well beyond hearing range anyway. To avoid unnecessary processing, you should set this value to a reasonable value and set the Mute3DAtMaximumDistance property of the BufferDescription to true. Finally, we can also specify values for the sound cone if the sound is directional. A sound cone is almost identical to the cone produced by a spotlight (see article 6). It consists of a set of angles, one for the inside and one for the outside cone, and orientation, and an outside volume property. Check out the DirectX documentation for more detail on sound cones. While the 3D Buffer describes the source of a sound, the 3D Listener describes the, well, listener. Just as the position, orientation, and velocity of the buffer affects the sound, so does the position, orientation, and velocity of the listener. The default listener in DirectX is located at the origin pointing toward the positive z-axis, and the top of the head is along the positive y-axis. Each application can only have one Listener3D object. 
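Between those two distances, the default attenuation follows an inverse-distance law: roughly 6 dB quieter for each doubling of the distance beyond MinDistance. Here is a Python sketch of that curve; this is an approximation of the default model with a rolloff factor of 1.0, not the exact DirectSound implementation:

```python
import math

def attenuation_db(distance, min_distance=1.0, max_distance=10000.0):
    """Approximate the default rolloff: full volume inside MinDistance,
    about -6 dB per doubling of distance beyond it, and no further
    change past MaxDistance."""
    d = max(min_distance, min(distance, max_distance))
    return 20.0 * math.log10(min_distance / d)

print(round(attenuation_db(1), 1))  # 0.0   -- at MinDistance, full volume
print(round(attenuation_db(2), 1))  # -6.0  -- one doubling of distance
print(round(attenuation_db(4), 1))  # -12.0 -- two doublings
```

Raising MinDistance is what makes a sound "carry" farther: a cannon with MinDistance = 10 is still at full volume where a footstep with MinDistance = 1 has already lost 20 dB.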
To change the way the player hears the sound in your game, you manipulate the position, orientation, and velocity of the Listener. You can also control global settings of the acoustic environment like the Doppler shift or Rolloff factor. Note: The DopplerFactor and RolloffFactor properties are a number between 0 and 10. Zero means that the value is turned off. One represents the real-world values of these acoustic effects. All other values are multiples, so that a 2 means doubling the real-world effect, 3 means tripling it, and so on. In BattleTank2005 we are going to add three new classes: a SoundDevice class to represent the actual audio device, a SoundListener class to encapsulate the Listener3D object, and a SoundEffects class to represent each separate sound effect. Each tank in BattleTank2005 can have multiple sounds associated with it, such as engine noise and the noise made when firing. To integrate sound we are going to update the Tank class to play these sounds at the appropriate time and with the appropriate position information. I did not add this functionality to the UnitBase class because the stationary objects (the obstacles) will not have any sounds associated with them. Before we start adding these classes we need to add a reference to the Microsoft.DirectX.DirectSound.DLL assembly. Make sure to choose the 1.0 and not the 2.0 version. BattleTank2005 has not yet been updated to use the beta versions of the 2.0 DirectX API. Add a new class called SoundDevice and add the using statement for Microsoft.DirectX.DirectSound. All three of the classes will implement the IDisposable interface and use the same Dispose pattern that we used in the Keyboard and Mouse classes. (I omitted that portion from the code samples below to keep them more compact and easy to understand.) Following the now familiar patterns we add a private variable called _device to the SoundDevice class and instantiate it in the constructor.
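The Doppler shift behind that note is the classic formula f' = f * c / (c - v) for a source approaching a stationary listener. A hedged Python illustration, where the DopplerFactor is modeled as a multiplier on the relative velocity (a simplification of my own; DirectX only specifies that the factor scales the real-world effect):

```python
SPEED_OF_SOUND = 343.0  # meters per second, at room temperature

def doppler(freq, speed_toward_listener, doppler_factor=1.0):
    """Perceived frequency of a source moving toward a stationary listener.
    doppler_factor mimics the 0-10 multiplier: 0 disables the effect,
    1 is real-world, 2 doubles it (modeled here by scaling the velocity)."""
    v = speed_toward_listener * doppler_factor
    return freq * SPEED_OF_SOUND / (SPEED_OF_SOUND - v)

print(round(doppler(440, 0)))     # 440 -- a stationary source is unshifted
print(round(doppler(440, 34.3)))  # 489 -- approaching at 10% of the speed of sound
```

This is why the effect is dramatic for fast shells and barely audible for slow tanks: the shift scales with the ratio of the source velocity to the speed of sound.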
We also need to set the CooperativeLevel before we can use the device. As discussed earlier, we will use the CooperativeLevel.Priority setting. We also pass a reference of the game form to this method so that the device can receive Windows messages. Finally, we surround the device creation code with a try/catch, since things can go wrong whenever we work with devices.

using System;
using Microsoft.DirectX.DirectSound;

namespace BattleTank2005
{
    class SoundDevice : IDisposable
    {
        public SoundDevice(System.Windows.Forms.Form parentForm)
        {
            try
            {
                _device = new Microsoft.DirectX.DirectSound.Device();
                _device.SetCooperativeLevel(parentForm, CooperativeLevel.Priority);
            }
            catch
            {
                // Can not use sounds
            }
        }

        public Microsoft.DirectX.DirectSound.Device AudioDevice
        {
            get { return _device; }
        }

        private Microsoft.DirectX.DirectSound.Device _device;
    }
}
This method simply commits all changes made to the Listener3DSettings since the last time the method was called. When all is done, the SoundListener class is positioned at the correct location in the game.

using System;
using Microsoft.DirectX;
using Microsoft.DirectX.DirectSound;

namespace BattleTank2005
{
    class SoundListener : IDisposable
    {
        public SoundListener(SoundDevice soundDevice)
        {
            BufferDescription bufferDescription = new BufferDescription();
            bufferDescription.PrimaryBuffer = true;
            bufferDescription.Control3D = true;

            // Get the primary buffer
            Microsoft.DirectX.DirectSound.Buffer buffer =
                new Microsoft.DirectX.DirectSound.Buffer(bufferDescription, soundDevice.AudioDevice);

            // Attach the listener to the primary buffer
            _listener3d = new Listener3D(buffer);

            // Store the initial parameters
            _listenerSettings = new Listener3DSettings();
            _listenerSettings = _listener3d.AllParameters;
        }

        public void Update(Vector3 position)
        {
            _listener3d.Position = position;
            _listener3d.CommitDeferredSettings();
        }

        private Microsoft.DirectX.DirectSound.Listener3D _listener3d;
        private Listener3DSettings _listenerSettings;
    }
}

Now that we have classes representing the audio device and a class to "listen" to the sounds, we need to actually create the sounds. For this we add the SoundEffect class. Once again we need to add the using statement for the DirectSound namespace. We also need a using statement for the DirectX namespace, because we are going to use the Vector3 class. A SoundEffect is created by passing a reference to the SoundDevice and a path to a WAV file containing a sound. In the constructor, we create a BufferDescription object that will "turn on" all of the capabilities we want. For right now we want the ability to control sounds in 3D, adjust the volume, and change the frequency.
Next we create a secondary buffer, passing in the path to the sound file and the BufferDescription we just created. To create the Buffer3D that we need to play 3D sounds, we pass the SecondaryBuffer to the Buffer3D class on instantiation, and voila! we have a 3D buffer. Once the Buffer3D class is set up we can access its properties and change the settings. We set the MaxDistance to a more manageable number. This setting, combined with the Mute3DAtMaximumDistance setting on the SecondaryBuffer, ensures that sounds too distant to hear are not played at all and don't consume any processing cycles. To actually use the SoundEffect in the game we provide three public methods: one to play the sound, one to stop the sound (if it is looping), and one to update the class. The Play method either plays the sound once or loops the sound, depending on the setting of the _isLooping variable. If the sound is looping, we need to be able to stop it, and that is what the Stop method does. The Update method is the way in which we pass the updated position of the tank the sound is associated with to the sound buffer, so the 3D effect can be accurately calculated.
using System;
using Microsoft.DirectX;
using Microsoft.DirectX.DirectSound;

namespace BattleTank2005
{
    public class SoundEffects : IDisposable
    {
        public SoundEffects(SoundDevice soundDevice, string soundFile)
        {
            BufferDescription bufferDescription = new BufferDescription();
            bufferDescription.Control3D = true;
            bufferDescription.ControlVolume = true;
            bufferDescription.ControlFrequency = true;
            bufferDescription.Mute3DAtMaximumDistance = true;

            try
            {
                _secondaryBuffer = new SecondaryBuffer(soundFile, bufferDescription, soundDevice.AudioDevice);
                _3dBuffer = new Buffer3D(_secondaryBuffer);
                _3dBuffer.MaxDistance = 10000;
            }
            catch
            {
                // Can not use sounds
            }
        }

        public void Update(Vector3 position)
        {
            _3dBuffer.Position = position;
        }

        public void Play()
        {
            if (_isLooping == true)
                _secondaryBuffer.Play(0, BufferPlayFlags.Looping);
            else
                _secondaryBuffer.Play(0, BufferPlayFlags.Default);
        }

        public void Stop()
        {
            _secondaryBuffer.Stop();
            _secondaryBuffer.SetCurrentPosition(0);
        }

        public bool IsLooping
        {
            set { _isLooping = value; }
        }

        public int Volume
        {
            set { _secondaryBuffer.Volume = value; }
        }

        public int Frequency
        {
            set { _secondaryBuffer.Frequency = value; }
        }

        private SecondaryBuffer _secondaryBuffer;
        private Buffer3D _3dBuffer;
        private bool _isLooping;
    }
}

Now we need to integrate these new classes into the overall game. The first step is to add private variables to the GameEngine class to hold a reference to the SoundDevice and SoundListener classes, because we can only have one of each of these classes. At the bottom of the GameEngine class, add the following code:

private SoundDevice _soundDevice;
private SoundListener _soundListener;

Next, we need to instantiate each class. The Initialize method of the GameEngine is the perfect spot for this, but to keep sound-related items together we will create a method called ConfigureSounds.
In the GameEngine class add the following method:

private void ConfigureSounds()
{
    _soundDevice = new SoundDevice(this);
    _soundListener = new SoundListener(_soundDevice);
}

Now add a call to the ConfigureSounds method to the Initialize method of the GameEngine, right after the call to the ConfigureDevice method.

ConfigureDevice();
ConfigureSounds();

With regard to the SoundDevice class, this is all we have to do. After creation, all we need this class for is its reference to the Device object. The SoundListener class, however, needs to be updated with the correct position information in each frame. The Render method of the GameEngine class is the perfect place for this. We simply pass in the position of the camera class, since it represents our location in the game. In the Render method of the GameEngine class, add the following code immediately after the call to the Update method of the camera.

_soundListener.Update(_camera.Position);

Each tank gets a unique instance of the SoundEffect class for each sound it can make. First we need to add some private variables to hold the two sound effects we want the tank to make. At the bottom of the Tank class add the following code:

private SoundEffects _engineSound;
private SoundEffects _fireSound;

To make sure each tank has the necessary sounds we will pass them in the constructor. Update the constructor of the Tank class to look as follows (the sound parameters and the sound setup are the new additions):

public Tank(Device device, string meshFile, Vector3 position, float scale,
    float speed, SoundEffects fireSound, SoundEffects engineSound)
    : base(device, meshFile, position, scale)
{
    _speed = speed;
    _engineSound = engineSound;
    _engineSound.IsLooping = true;
    _engineSound.Play();
    _fireSound = fireSound;
}

Once the sound effects are associated with the tank instance, we start the engine sound after setting it to looping.
The next step is to change the Update method of the Tank class to update the 3D buffer with the position of the tank. Add the following code to the end of the Update method:

if (_engineSound != null)
{
    _engineSound.Update(base.Position);
}

The shooting sound of the tank is not continuous, and whether or not the tank shoots will be determined by the AI we are going to add later on. For now, all we are going to do is create the necessary framework to play the shooting sound. Add the following method to the Tank class:

public void Shoot()
{
    if (_fireSound != null)
    {
        _fireSound.Update(base.Position);
        _fireSound.Play();
    }
}

The final step in making the Tank class sound-ready is to update the CreateTanks method of the GameEngine. We need to create the SoundEffect for the engine noise and the firing noise, and pass them to the Tank class on creation. Change the CreateTanks method to look as follows:

private void CreateTanks()
{
    _tanks = new List<UnitBase>();

    SoundEffects engineSound1 = new SoundEffects(_soundDevice, @"EngineSound1.wav");
    engineSound1.Volume = -1000;
    engineSound1.Frequency = 100000; // Set to a value between 100 - 200000

    SoundEffects engineSound2 = new SoundEffects(_soundDevice, @"EngineSound2.wav");
    engineSound2.Volume = -1000;
    engineSound2.Frequency = 10000;  // Set to a value between 100 - 200000

    SoundEffects fireSound = new SoundEffects(_soundDevice, @"Fire.wav");

    Tank newTank1 = new Tank(_device, @"bigship1.x", new Vector3(0.0f, 20.0f, 100.0f), 1f, 10.0f, fireSound, engineSound1);
    Tank newTank2 = new Tank(_device, @"bigship1.x", new Vector3(100.0f, 20.0f, 100.0f), 1f, 10.0f, fireSound, engineSound2);

    _tanks.Add(newTank1);
    _tanks.Add(newTank2);
}

This is where you can experiment with changing the volume or frequency of a sound effect. Fly around and get closer to each spaceship, and the sound will vary in direction and volume according to your position.
Change the frequency, and the pitch will change, and you can guess what the volume setting does. When you play BattleTank2005 now, you should hear the engine noises of the various enemy tanks properly adjusted for their spatial relationship to the listener (you). At this point, we have integrated the first set of the audio features we want to add to BattleTank2005. You should definitely experiment with the various settings to hear their effect. I have left several optional settings commented in the code so you can easily change them. In the next article we are going to cover how to use the built-in sound effect manipulation of DirectSound to change the sounds, and how to play a regular MP3 file for the soundtrack using the AudioVideoPlayback namespace. Until then: Happy coding!
Created on 2004-03-11 23:50 by dmgass, last changed 2004-08-29 22:40 by tim.peters. This issue is now closed. lib/difflib.py: Added support for generating side by side differences. Intended to be used for generating HTML pages but is generic where it can be used for other types of markup. tools/scripts/diff.py: Added -m option to use above patch to generate an HTML page of side by side differences between two files. The existing -c option when used with the new -m option controls whether contextual differences are shown or whether the entire file is displayed (number of context lines is controlled by existing -l option). NOTES: (1) Textual context diffs were included as requested. In addition, full and contextual HTML side by side differences (that this patch generated) are also included to assist in analyzing the differences and showing an example. (2) If this functionality is worthy of inclusion in the standard python distribution, I am willing to provide more documentation or make modifications to change the interface or make improvements. (3) When using Internet Explorer some font sizes seem to skew things a bit. Generally I've found the "smallest" to work best. If someone knows why, I'd be interested in making any necessary adjustments in the generated HTML. Logged In: YES user_id=995755 In the time since submission I've found that the interface to the chgFmt and lineFmt functions (arguments of mdiff) should include both the line number and an indication of side (from/to). The use for it I've found is for dropping anchors into the generated markup so that it can be hyperlinked from elsewhere. Logged In: YES user_id=995755 Also, I will need to submit the fix for making it behave nicely when there are no differences!! Logged In: YES user_id=764593 Thank you; I have often wished for side-by-side, but not quite badly enough to write it. That said, I would recommend some tweaks to the formatting. "font" is deprecated; "span" would be better.
On my display, the "next" lines don't always seem to be counted (except the first one), so that the anchors column is not lined up with the others. (This would also be fixed by the separate-rows suggestion.) Ideally, the line numbers would be in a separate column from the code, to make cut'n'paste easier. (Then you could replace font.num (or span.num) with td.linenum.) Ideally, each change group should be a separate row in the table. (I realize that this probably means a two-layer iterator, so that the line breaks and blank lines can be inserted correctly in the code columns.)

Logged In: YES user_id=995755
I just attached an updated patch. I based the patch on diff.py (CVS 1.1) and difflib.py (CVS 1.20), which was the latest I saw today on viewCVS. The following enhancements were made:
1) user interface greatly simplified for generating HTML (see diff.py for an example)
2) generated HTML now 4.01 Transitional compliant (so says HTML Tidy)
3) HTML color scheme for differences now matches that used by viewCVS
4) differences table now has a row for each line, making the HTML less susceptible to browser quirks
5) took care of all issues to date enumerated here in this patch
It would be great if I could get some help on:
A) getting some JavaScript written to be able to select and cut text from a single column (right now text is selected from the whole row, including both "from" and "to" text and line numbers)
B) solving the line width issue. Currently the "from"/"to" column is as wide as the widest line. Any ideas on wrapping or scrolling?
As of now the only feature I may want to add in the near future is optional tab expansion. Thanks to those that have commented here and emailed me with suggestions and advice!

Logged In: YES user_id=89016
Submitting difflib_context.html to validator.w3.org gives 2333 errors. The character encoding is missing and there's no DOCTYPE declaration.
The rest of the errors seem to be mostly related to the nowrap attribute (which must be written as nowrap="nowrap", as this is the only allowed value). Furthermore the patch contains a mix of CR and CRLF terminated lines.

Logged In: YES user_id=995755
With the updated patch code I just posted, both full and contextual differences pass the validator.w3.org check (XHTML 1.0 Transitional). Also, the extra carriage returns put in by DOS were removed from the difflib.py patch.

Logged In: YES user_id=21627
The patch lacks changes to the documentation (libdifflib.tex) and changes to the test suite. Please always submit patches as single unified or context diffs, rather than zip files of the revised files.

Logged In: YES user_id=995755
Since I have not gotten any feedback on the user interface, I took the liberty to tweak it the best I thought how. Since I consider this new functionality very solid, I went ahead and created changes to the documentation and test suite. Per Martin v. Löwis's (loewis) instructions I generated a patch which hopefully meets his needs. My CVS access is limited to the web interface and thus the patch is based on the latest checked in as of today:
python/python/dist/src/Lib/difflib.py -- rev 1.21
python/python/dist/src/Tools/scripts/diff.py -- rev 1.2
python/python/dist/src/Lib/test/test_difflib.py -- rev 1.10
python/python/dist/src/Doc/lib/libdifflib.tex -- rev 1.17
Note, I am not very familiar with .tex but it seemed straightforward. I edited the file by hand and it should be very close to what I intended. Unfortunately I am not set up to convert the .tex to HTML. I may try that next week.
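For readers following along: the functionality being developed in this thread eventually shipped as difflib.HtmlDiff in Python 2.4. A minimal usage sketch against the released module (the file names and inputs here are invented for illustration):

```python
import difflib

fromlines = ["one\n", "two\n", "three\n"]
tolines = ["one\n", "too\n", "three\n"]

# Complete XHTML page, including the styles and legend discussed here.
page = difflib.HtmlDiff().make_file(fromlines, tolines, "before.txt", "after.txt")

# Just the side-by-side <table> fragment, with 2 lines of context
# around each change.
table = difflib.HtmlDiff().make_table(fromlines, tolines, context=True, numlines=2)

print("<table" in table and "<html" in page.lower())
```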
Logged In: YES user_id=57486
The diffs look cool, but if the to and from lines are identical an exception is thrown:
  File "<string>", line 23, in makeDiff
  File "c:\_dev\rx4rdf\rx\htmldiff.py", line 1687, in make_file
    summary=summary))
  File "c:\_dev\rx4rdf\rx\htmldiff.py", line 1741, in make_table
    if not diff_flags[0]:
IndexError: list index out of range
(This is on Python 2.3.3 -- I renamed your difflib.py to htmldiff.py and put it in site-packages.) Perhaps you should add the case of two identical files to test_difflib.py. thanks

Logged In: YES user_id=80475
Unfortunately, I do not have time to give this more review. Several thoughts:
- consider making mdiff private. that will leave its API flexible to accommodate future changes
- move the templates to private global variables and work to improve their indentation so that the html is readable
- inline the code for _escape from sax. the three replaces are not worth the interdependency
- the methods work fine with properly formatted input but crash badly when the newlines have been stripped. So, either make the code more flexible or add error handling.
- overall, nice job.

Logged In: YES user_id=995755
I intend on implementing all the suggestions from the last two comment submissions when I hear back from Raymond Hettinger. I'm questioning making the templates private global variables. I intentionally made them members of a class so that they could be overridden when the class is subclassed.

Logged In: YES user_id=764593
I agree that users should be able to override the template. I think Raymond was concerned about (1) backwards compatibility if you want to change the interface. For instance, you might later decide that there should be separate templates for header/footer/match section/changed line/added line/deleted line. (2) matching the other diff options, so this doesn't look far more complicated.
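The identical-inputs crash reported above was fixed before the patch landed; in the released difflib, identical sequences simply produce a table with no change markup. A quick check (a sketch against the modern module, not the patch revision under discussion):

```python
import difflib

same = ["alpha\n", "beta\n"]

# No exception for identical inputs, and no added/changed/deleted
# styling should appear anywhere in the generated table.
table = difflib.HtmlDiff().make_table(same, same)
has_change_markup = any(c in table for c in ("diff_add", "diff_chg", "diff_sub"))
print(has_change_markup)
```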
Unfortunately, at the moment I don't see a good way to solve those; using a private member and access functions wouldn't really simplify things much.

Logged In: YES user_id=995755
Maybe a good compromise would be to leave them members of the class but name them to imply they are private (add a leading underscore). That way, if someone wants to create their own subclass they can, at their own risk. They can always write a little extra logic to ensure the interface didn't change and raise a custom exception explaining what happened and what they need to look at. I didn't read (2) in Raymond's comments. Could you expand on those concerns?

Logged In: YES user_id=764593
For a context or unified diff, you don't really need to parametrize it very much. Having a much larger API for side-by-side puts things out of balance, which can make side-by-side look either more difficult or preferred.

Logged In: YES user_id=995755
I agree the API for make_file() and make_table() has a lot of optional parameters that can look daunting. The alternative of providing accessor methods is less appealing to me. It sounds like Jim & I are in agreement on this. As far as the compromise of making the templates private members, I am assuming Jim is in favor of it. I'm in favor of them being members; it doesn't matter to me if they are private. I'll wait until Raymond weighs in on this one before I move on it. Jim -- thanks for your input.

Logged In: YES user_id=33168
I have not reviewed the code yet, so this is just a general comment. I don't know if it is applicable. You could create a configuration class that has attributes. A user would only need to assign to the params that they want to update. If you change the names you could add properties to deal with backwards compatibility.

Logged In: YES user_id=764593
Actually, I'm not convinced that the templates should be private, though I can certainly see protected.
(double-underscore private is mangled; single-underscore protected will be left out of some imports and doco, but acts like a regular variable if people choose to use it anyway.) I'm still thinking about nnorwitz's suggestion; the config option could be ignored by most people ("use the defaults") but would hold persistent state for people with explicit preferences (since they'll probably want to make the same changes to all compares).

Logged In: YES user_id=57486
certainly you need to be able to access the templates and modify them. And should be documented too -- one of the first things i wanted to know is how can i customize the output. But a config object seems like conceptual clutter. keyword arguments can be ignored and are just as persistent if you use ** on a dict. few more suggestions:
* the default font seems too large on both ie 6 and firefox (this is with 1200 x 800 screen resolution, so it'd be even larger at a lower resolution).
* maybe the links column (t, f, n) should also be on the left -- often there are long lines and you need to scroll over to access it.
* add class attributes to the two tables used in the templates and move their styles there (except i believe that IE doesn't honor cellpadding/spacing styles in css) thanks

Logged In: YES user_id=80475
To clarify, the private variables are just a way to move the defaults out of the class definition and give the OP more control over formatting:

    _legend = """
    <table summary="Legends" style="font-family:Courier">
        <tr> <th colspan="2"> Legends </th> </tr>
        <tr>
            <td>
                <table border="" summary="Colors">
                    <tr><th> Colors </th></tr>
                    <tr><td class="diff_add"> Added </td></tr>
                    <tr><td class="diff_chg">Changed</td></tr>
                    <tr><td class="diff_sub">Deleted</td></tr>
                </table>
            </td>
            <td>
                <table border="" summary="Links">
                    <tr><th colspan="2"> Links </th></tr>
                    <tr><td>(f)irst change</td></tr>
                    <tr><td>(n)ext change</td></tr>
                    <tr><td>(t)op</td></tr>
                </table>
            </td>
        </tr>
    </table>
    """

    class HtmlDiff(object):
        file_template = _file_template
        styles = _styles
        linenum_template = _linenum_template
        table_template = _table_template
        legend = _legend

        def __init__(self, prefix=['from','to'], linejunk=None, . . .

Logged In: YES user_id=995755
I will be implementing the following suggestions:
Raymond 7/19 & 7/23 [all]
Adam 7/17 [all]
Adam 7/23 [partial, but I need feedback]
- Can I get a recommended solution for the font issue?
- I'll likely do the additional links column on the left
- Why move the attributes to a class for the two tables?
Unless the template issue (public/protected/private) is discussed further, I will assume everyone is in agreement or can live with Raymond's suggestion & clarification. I will not be implementing any changes to the API right now. I lean somewhat strongly toward leaving the optional arguments as they are, but this can be talked about further if anyone thinks there are strong arguments to the contrary. Thanks Raymond, Adam, Jim, and Neal for your input; the latest patch should be posted hopefully by the end of the weekend.
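In the released version, the templates ended up roughly as the protected class attributes discussed here (_file_template, _styles, _table_template, _legend), so customizing by subclassing looks something like the sketch below. These are private names in the shipped difflib, so treat this as an implementation-detail sketch that may break across versions:

```python
import difflib

class MyHtmlDiff(difflib.HtmlDiff):
    # Override the protected legend template; _legend is interpolated
    # into the full-page output by make_file().
    _legend = '<p class="legend">custom legend</p>'

page = MyHtmlDiff().make_file(["a\n"], ["b\n"])
print("custom legend" in page)
```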
Logged In: YES user_id=57486
> Why move the attributes to a class for the two tables?
In general it's good practice to separate the style info from the html. For this case in particular i had to do this because i'm embedding the diff tables in a generic page template that applies general style rules to tables in the page. By adding a class for those tables i was able to add a rule for it that overrode the more general table rule.
> Can I get a recommended solution for the font issue?
Not sure, I added this rule:
    .diff_table th, .diff_table td { font-size: smaller; }
But it's a bit problematic as Mozilla defaults to a considerably smaller monotype font size than IE, so it's hard to be consistent across browsers.

Logged In: YES user_id=995755
Adam's suggestion seems to make a lot of sense regarding moving the table attributes to a class. I will implement it in the next update. I will experiment with the font issue but will likely hold off on implementing anything. Adam -- thanks for the quick feedback.

Logged In: YES user_id=995755
Updated the patch as follows:
1) handles no differences now in both full and context mode (Adam discovered that it crashed when there were no differences in context mode).
2) handles empty string lists (I'm pretty sure it would have crashed had someone tried it).
3) "links" column in the middle now appears on the left as well.
4) Moved prefix argument from constructor to make_file and make_table methods. Also made it work by default so that if you are generating multiple tables and putting them on the same HTML page there are no anchor name conflicts.
5) mdiff() function is now protected: _mdiff(), so we are at liberty to change the interface in the future.
6) templates moved to protected global variables (still public members of the class) so that the indenting could be improved.
7) Improved the indenting in other areas so that the HTML is now much more readable.
8) Inlined the special character escaping so that the xml.sax library function is not used (this seems to have improved the performance quite a bit).
9) Moved as many <table> attributes as possible to a style sheet class. Adam, please review this in case I missed some.
10) Expanded test suite to cover the above changes and made it easier to baseline.
11) Updated documentation to reflect above changes.
NOTES
N1) Raymond, you had mentioned this crashes when the newlines are stripped. I modified the test to include stripping and not, and have found both to work without having to fix anything. Can you duplicate what you saw and give me more info?
N2) I've noticed the HTML does not render tabs very well (at all). Is this OK or does anyone have any good ideas?

Logged In: YES user_id=764593
Technically, HTML treats all whitespace (space, tab, newline, carriage return, etc.) as interchangeable with a single space, except sometimes when you are in a <pre> block. In practice, user agents often honor it anyhow. If tab isn't working, you might have better luck replacing it with a series of 8 &nbsp; entities ((ampersand, then "nbsp", then semicolon)*8), but I'm not sure the ugliness is worth it.

Logged In: YES user_id=995755
I am considering making an optional argument to the constructor for specifying tab indentation. If nothing was passed it defaults to not handling tabs (better performance). If a number is passed, the string sequences (lists of lines) would be wrapped with a generator function to convert the tabs in each line with the expandtabs string method. The other option is to expand tabs all the time. If one is going to handle tabs it must be done on the input, because once it is processed (markup added) the algorithm for expanding tabs becomes really complicated. Any opinions regarding these options are welcome. I think I should change the default prefix (_default_prefix) to be a class member rather than initialize it with the constructor.
(The default prefix is used to generate unique anchor names so there are no conflicts between multiple tables on the same HTML page.) I'm leaning this way because a user may create separate instances of HtmlDiff (to have different ndiff or tab settings) but place the tables on the same page. If I left it, the hyperlinks in the second table would jump to the first table.

Logged In: YES user_id=764593
Your tab solution sounds as good as any. I'm not sure I understand your intent with the default context. module-level is shared. class-level is shared unless/until assigned. instance-level (including "set by constructor") lets different HtmlDiff have different values.

Logged In: YES user_id=89016
Formatting one line for output should be the job of an overridable method. This makes it possible to implement various tab replacement schemes and possibly even syntax highlighting. (BTW, I'd like my tabs to be replaced by <span class="tab">· </span>)

Logged In: YES user_id=995755
You can replace tabs with markup by overriding format_line(). The disadvantage is that doing smart replacements (such as expandtabs() does) is more difficult because there could already be markup in there which doesn't count toward the tab stops. You can accomplish Walter's substitution easily by overriding format_line(). This substitution cannot be done at the front end because the markup will be escaped and displayed. I'm seeing this as a supporting argument for making the tab filtering optional on the front end (dependent on how much of a performance hit it is to do it all the time). I intend on making the default prefix class-level so that different HtmlDiff instances share (and increment) the same value, to avoid anchor name conflicts between two tables placed on the same HTML page. Jim, does that help clarify?
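The whitespace problem being debated comes down to two steps that are easy to see in isolation: expand tabs to fixed stops, then substitute a non-collapsing entity. A sketch of that idea, outside of difflib entirely:

```python
line = "def\tf():\treturn 1"

# Step 1: expand tabs to fixed 8-column stops.
expanded = line.expandtabs(8)

# Step 2: replace spaces with &nbsp; so an HTML renderer cannot
# collapse the runs back down to a single space.
html_safe = expanded.replace(" ", "&nbsp;")

print(repr(expanded))
```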
Logged In: YES user_id=995755
Unless I hear opposing arguments I will be updating the patch to handle tabs and to change the default prefix to be class-level either tonight or tomorrow night. I plan on adding both pre- and post-processing to handle tabs. Preprocessing will be controlled by an optional keyword argument in the HtmlDiff constructor specifying the number of spaces per tab stop. If absent or None is passed, no preprocessing will occur. Otherwise the tabs in each line of text will be expanded to the specified tab stop spacing (note, I will use the expandtabs() string method but will convert the resulting expanded spaces back into tabs so they get handled by post-processing). Post-processing will always occur. Any tabs remaining in the HTML markup will be substituted with the contents of a class-level variable (which can be overridden). By default, this variable will be set to '&nbsp;'. These two changes will allow tab expansion to the correct tab stop without sacrificing the ability to see differences in files where tabs were converted to spaces or vice versa. It also provides a mechanism where, by default, tabs are reasonably represented (or can be easily changed to your custom representation).

Logged In: YES user_id=764593
Are you saying that the default will replace tabs with a single non-breaking space? Not 3-4, as in many programming environments, or 8 as in standard keyboard, but 1? No objections other than that.

Logged In: YES user_id=995755
For the case where you instantiate HtmlDiff saying you want tabs expanded, it will insert non-breaking space(s) up to the next tab stop. For the case where you do NOT specify tab expansion, it will substitute one non-breakable space unless you override it with a different string (where you could choose 3, 4, or 8 spaces). We could make 3, 4, or 8 spaces the default but it makes it more complex because it would require two overridable class-level members
    ...
    spaces_per_tab = 8
    tab_space = '&nbsp;'
    ...
and the post-processing would look like
    ...
    return s.replace('\t', self.tab_space * self.spaces_per_tab)
    ...
and the pre-processing setup in the constructor would need to override the class-level member used for the post-processing: self.spaces_per_tab = 1. We could also use a special character for tabs. We could even attempt to use a combination of nbsp and a special character to show the tab stops. I'd need to play with re's to do that.

Logged In: YES user_id=764593
If you're dealing with tabs anywhere except at the start of a line, then you probably can't solve it in a general way -- tabstops become variable. If you're willing to settle for fixed-width tabstops (as on old typewriters, still works in some environments, works fine in tab-initial strings) then tab="&nbsp;" does everything you want in a single variable for tab-initial, and has all the information you need for fancy tab-internal processing (which I don't recommend).

Logged In: YES user_id=995755
Updated the patch as follows:
1) Now handles expanding tabs and representing them with an appropriate HTML representation.
2) moved default prefix to class-level
NOTES
N1) See _libdifflib_tex_UPDATE.html and test_difflib_expect.html for an example of how tab differences get rendered.

Logged In: YES user_id=995755
I think we are getting very close to having something for the next alpha release for Python 2.4. One exception: in the last patch update I used a list comprehension that calls a function for every line of text. I'm thinking I should have called the function with the list and have it pass back a newly constructed list. To be sure which is the better way I want to do a performance measurement. I also would like to measure performance with and without "smarttabs". If it does not cost much I might be in favor of eliminating the option and just doing "smarttabs" all the time. In addition to performance degradation, it would eliminate the ability to do straight tab-for-spaces substitution (is this bad?).
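For the record, the tab-handling scheme settled on here is essentially what shipped: a tabsize argument on the constructor controls expansion, and the tab-derived spaces come out as &nbsp; entities in the generated markup. A sketch against the released module:

```python
import difflib

# Identical inputs containing a tab: the rendered cells still need
# &nbsp; so the indentation survives HTML whitespace collapsing.
table = difflib.HtmlDiff(tabsize=4).make_table(["a\tb\n"], ["a\tb\n"])
print("&nbsp;" in table)
```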
Logged In: YES user_id=80475
Rather than spending time on performance measurement, it is best to focus on other goals. Aim for the smallest amount of code, the neatest output, greatest ease of use, and the simplest way to change the output appearance. The size of the docs is one metric of ease of use. Ideally, it would take a paragraph or two to explain. Also, write some sample code that produces a different output appearance (XML for example). How easy was it to do? The goal is to focus the code into three parts: the underlying ndiff, converting ndiff to side-by-side, and output formatting.

Logged In: YES user_id=995755
> Rather than spending time on performance measurement, it is
> best to focus on other goals. Aim for the smallest amount
> of code, the neatest output, greatest ease of use, and the
> simplest way to change the output appearance.
Noted.
> The size of the docs is one metric of ease of use. Ideally,
> it would take a paragraph or two to explain.
So far I've patched the libdifflib.tex file with about that amount of material. It details the "out-of-box" methods for generating HTML side-by-side differences. It does not address templates that we are leaving public to adjust the output. Should the documentation be left simple with just a reference to the doc string documentation within the module for further information about using the templates to adjust the output?
> Also, write some sample code that produces a different
> output appearance (XML for example). How easy was it to do?
The _mdiff() function could be used by those interested in doing side-by-side diffs with other markup such as XML. Previously you had mentioned that we should hide this function for now so we can reserve the right to change the interface.
Truthfully I did not mind this decision because I don't think there is much need for it, and it does avoid a lot of documentation work to explain how to use it :-)
> The goal is to focus the code into three parts: the
> underlying ndiff, converting ndiff to side-by-side, and
> output formatting.
1) ndiff() is what it is and I had no need to change it.
2) converting ndiff() to side-by-side is handled (and packaged neatly) by _mdiff(). The code in my opinion is well written and well commented. It is not public code, so documentation for the Python manuals is not required.
3) There has been a great deal of discussion regarding output formatting (early on a fair amount of it was done through private emails, but as of late everything is being logged here). IMHO the interface has shaped up very well, but I am still open for more suggestions.
To date most of the feedback I have gotten is on the API and the output. I haven't heard much about the code.

Logged In: YES user_id=995755
Is this patch being considered for going into Python 2.4 (and hence being checked into an alpha release)? FWIW, Tim Peters was +1 in favor of having this functionality in Python (even before the API was significantly enhanced) before washing his hands of it. I'd be interested in knowing whether this is going into 2.4, being shelved until the next release, or whether the development/distribution of this functionality should be moved to its own project. I'm getting a little anxious because I know an alpha release is in the works this week but I don't know how many more opportunities we have beyond that for 2.4. (I'm assuming there will be a long time before the next one.) Are there any more outstanding issues with it that need to be resolved? I am currently under the assumption no one has any objections or further recommendations.
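The _mdiff() generator discussed above did stay in the module as a private helper; for anyone producing non-HTML markup, its output shape looks like this (private API, so the details may change between Python versions):

```python
import difflib

from_lines = ["one\n", "two\n"]
to_lines = ["one\n", "too\n"]

# _mdiff yields ((from_lineno, from_text), (to_lineno, to_text), changed)
# tuples; intraline changes are bracketed by \0+, \0-, \0^ ... \1 markers
# that a formatter can turn into any markup it likes.
rows = list(difflib._mdiff(from_lines, to_lines))
for (f_num, f_text), (t_num, t_text), changed in rows:
    print(f_num, repr(f_text), t_num, repr(t_text), changed)
```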
Logged In: YES user_id=21627
Given that this code was developed from scratch in this tracker, and given the loads of changes it has seen, I'd be in favour of postponing it until after 2.4.

Logged In: YES user_id=31435
The first version I saw from Dan worked fine for my tastes, and he's been very responsive to the unusually large number of suggestions made by the unusually large number of reviewers. The number of reviewers demonstrates real interest, and while all have their own UI and API agendas to push, I don't see anyone opposed to the patch. This should go in.

Logged In: YES user_id=764593
Martin -- there have been a fair number of changes, and I agree that major new functionality is often asked to cook for a release outside the core. That said, the changes were mostly the sort of things that could be done even after it was in the core; they're just being done early. So I'm inclined to agree with Tim in this particular case. I feel even more strongly that if it doesn't go in 2.4, then it should at least be targeted for 2.5, rather than an indefinite "future". Also note that (1) this doesn't require much in the way of changes to the previous code; it could be backed out if there are problems during the testing phase. (2) there are a few alphas left, plus betas, and the author has been very fast with fixes; if there are problems, I'm confident that he can fix them in plenty of time.

Logged In: YES user_id=995755
FWIW here is some background that may help determine your comfort level: the basis of this code was written late last year. Just before submitting it as a patch early this year I cleaned it up quite a bit. Since that time its "core" _mdiff() has been extremely stable. I (we) have been using it a fair amount where I work without problems (early on its use flushed out some additional requirements). Obviously I've been using it quite a bit in this patch to show what I'm changing.
All of the changes to date have been to 1) improve the API to make it really easy to use yet still flexible to customize, 2) change the output to meet XHTML 1.0 Transitional standards, 3) improve output HTML functionality, and 4) correct some corner case bugs. Jim's notes are correct: 1) this patch is strictly an addition to difflib.py, as it required no changes to the existing code, and 2) I am currently only involved in this patch and my config-py sourceforge project, so providing support has not been and will not be a problem.

Logged In: YES user_id=995755
I did a rough performance measurement where I addressed a concern I had in a previous comment (function call in a list comprehension). The time difference was so small I could not determine which was better. Until I hear complaints about its speed I am not planning on looking into performance improvements. As far as the "smarttab" option, I have not gotten any feedback. I'm inclined to leave it as is because it allows the most flexibility. I will not be monitoring this patch (or my email) from now until next Monday (August 9).

Logged In: YES user_id=995755
Raymond -- I am new to the Python-Dev community, so I can only assume you would be the one to apply this patch and check it into CVS since this patch is assigned to you. Because of that, and the fact that you have looked it over to some degree, I would like to hear your opinion on when you think this should go in. It would be helpful to know if it is going in 2.4 and what work needs to be done on my part. I would rather do the work sooner than later for everyone's comfort level. It may be a good break from reading about @|decorators :-) I have found documentation on how to generate the HTML documentation from the .tex file I patched in order to check my work more accurately.
My suspicion is that Linux is the platform on which it would be easiest to build the documentation, since it would already have most of the tools (I have a Suse Linux system at home but am not very familiar with it). Windows is still the most convenient platform for me to work under right now. Unfortunately I do not yet have broadband (or internet) on my Linux home system, only dial-up on my home Windows system :-( and high speed on my Windows system at work. Any suggestions on which platform to use and the easiest way to obtain all the source files to get started would be appreciated -- I can be reached directly at dan.gass_at_gmail_dot_com.

Logged In: YES user_id=995755
Quick poll to interested parties: do you want me to add logic to handle line wrapping now, later, or never? It's been something I wanted to address, and I thought of a relatively easy way to do it this morning while in the shower. In the code that inserts the XHTML markup I would put temporary markers (perhaps \0, \1) around the markup so that I could easily count visible characters accurately and perform the break. I'd add a "wrapcolumn" attribute to the __init__ method that would default to None (no wrapping). I would break on the exact column number (not worrying about breaking on whitespace). I'll also put some type of continuation marker in the line number column (probably a single ">").

I did get a hold of Raymond. He suggests recruiting either Tim or another developer to work on inserting the patch, as his time is tight right now. I'll put out a plea for help on Python-Dev this weekend (I'll wait until after I implement line wrapping if feedback is positive) unless I'm lucky and get a volunteer based on this posting. Raymond recommended a "tex" checker script that comes in the standard distribution to run on my modifications to the .tex documentation file. My changes are OK.

Logged In: YES user_id=764593
(1) I don't have checkin privs, so I can't help there.
(2) Line wrapping is tricky -- I couldn't think of a good way to do it, which is why I didn't mention it earlier. Your way sounds as good as any, so long as it is easily overridden (including a "no wrap" option).

Logged In: YES user_id=995755
I'm pretty far along in implementing the (optional) line wrapping feature; I'll post it in the next day or two. Assuming this goes into the next alpha, does anyone have any objections to changing everything (except some of the templates) to be protected (via naming everything with a leading underscore) for this release? It would discourage anyone from counting on any of the internal workings, so we would be at liberty to change it in future releases should anything shake out from its more widespread use. I think the templates and the API allow enough control to tailor the output without exposing the remainder of the implementation.

Logged In: YES user_id=80475
+1 on exposing as little of the API as possible.

Logged In: YES user_id=357491
+1 on minimizing the API as well. Easier to expose more of an API later than to retract any part of it.

Logged In: YES user_id=995755
I'm still in the process of implementing the line wrapping and am gaining a greater appreciation for minimizing the API (in this process I have been simplifying some of the code). Based on this and the feedback so far, I am proposing the following API simplifications:

class HtmlDiff(): I will be making everything protected by convention (adding a leading underscore) except for the make_file() and make_table() methods. This should warn those looking at the internals that things may change in future releases. We may want to consider making additional public interfaces in the future when more experience is gained.

HtmlDiff.__init__():
1) remove 'smarttabs' argument (I will always expand tabs using the expandtabs() string method).
HtmlDiff.make_file() / HtmlDiff.make_table():
1) remove 'fromprefix' and 'toprefix' arguments (the vast majority of applications don't need this; for now, corner cases can solve it with a string search/replace algorithm after the fact).
2) remove 'summary' argument (this added a summary attribute to the <table> tag and I believe would not be used much, if at all). A user could always do a string search and replace on the <table> tag to insert this after the fact.

HtmlDiff.make_file():
1) leave 'title' and 'header' arguments alone (I could be talked into removing these). These arguments are for controlling the window title and any markup to be inserted just above the table. Although these could be inserted after the fact using string search and replace methods, I think these will be commonly used and should be convenient (plus they are easy to understand).

ANY OTHER API SIMPLIFICATION IDEAS?

Logged In: YES user_id=357491
Don't have to look this up to see if this is already supported, but the only thing that I could see people wanting is a way to inject their own stylesheet. Otherwise it sounds good to me.

Logged In: YES user_id=995755
Should we expect the user to search/replace the style sheet in the generated HTML? I'm on a simplification kick and this would keep the API from getting bigger. I'm inclined to do the same for title and headers so I could eliminate them from the API. I have completed the line wrapping functionality and the simplifications/streamlining talked about to date. I'm currently testing and updating the documentation. Hope to post it late tonight or tomorrow. If there is anything else that should be changed, now would be a good time to speak up! Thanks again for everyone's help.

Logged In: YES user_id=995755
Updated the patch as follows:
1) added optional line wrapping using a wrapcolumn argument to make_file and make_table methods.
2) eliminated a number of optional arguments to simplify the API (as discussed).
3) made everything protected (by naming convention) except for the public make_table and make_file methods (as discussed).

NOTE: the patch was generated against CVS revisions from a month or so back:
python/python/dist/src/Lib/difflib.py -- rev 1.21
python/python/dist/src/Tools/scripts/diff.py -- rev 1.2
python/python/dist/src/Lib/test/test_difflib.py -- rev 1.10
python/python/dist/src/Doc/lib/libdifflib.tex -- rev 1.17

NOTE2: I will not be monitoring emails or this patch from Saturday Aug 21 thru Wednesday Aug 25. When I break my internet silence I'll see if I've blundered this update (unlikely because I tested it pretty well, but nonetheless possible). If everything went well I will be soliciting Python-Dev to try to get this into alpha3. I need someone with checkin privileges to do it. Should I be trying to gather more support for its inclusion, or does this only need to be done to convince someone with checkin privileges to apply the patch?

Logged In: YES user_id=21627
Based on Tim's approval, I have applied this patch as
libdifflib.tex 1.18
difflib.py 1.23
test_difflib.py 1.11
test_difflib_expect.html 1.1
ACKS 1.279
NEWS 1.1118
diff.py 1.4
If you want to make further changes to the code, please submit them as a diff against the current CVS, in a new SF patch. Please generate this diff through "cvs diff -u" if possible, as this will put the proper file path into each chunk, so that patch does not need to ask what file each chunk applies to.

Logged In: YES user_id=31435
Dan, there's a problem you need to fix: all .py files in the core are run thru reindent.py, which, among other things, chops invisible trailing whitespace off lines. Some of your new lines in difflib.py and in test_difflib.py do have invisible trailing whitespace (especially inside triple-quote strings), and test_difflib fails when that junk is removed. Repairing this is probably just a matter of running reindent.py on those files, then generating a new test_difflib_expect.html. Please verify that's all that's needed.
If you say it is, I'll do that and check in the result. Else we'll have to revert the checkin.

Logged In: YES user_id=995755
Tim, sorry for the inconvenience. I ran reindent.py over all the .py files and duplicated what you saw -- test_difflib.py does fail due to whitespace being removed from triple-quoted template strings. You are correct, the proper action is to run reindent.py over the .py files and regenerate test_difflib_expect.html. (It can be done by temporarily uncommenting three lines in test_difflib.py on the lines following "# temporarily uncomment".) I went through this exercise and verified the new expectations are OK. (After regenerating, you may want to open test_difflib_expect.html just to make sure it renders reasonably, for a sanity check.) I would volunteer to generate a new patch file, but I do not have direct access to the CVS archives and this would probably create as much work for you as I save. I appreciate your offer to help. If there is anything else I should do, let me know by posting here or emailing me directly at dan.gass at gmail.com. Thanks, Dan

Logged In: YES user_id=31435
Thanks, Dan! I checked the changes in. The new HTML file wasn't obviously damaged, but since I'm not intimately familiar with what it's intending to show, I could well be missing something. If it looked OK to you when you tried it, I'm not gonna argue <wink>. Looks good!
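For readers following along: the public surface that survived this thread – an HtmlDiff class exposing only make_file() and make_table(), with the optional wrapcolumn line-wrapping argument – is what eventually shipped in difflib. A minimal usage sketch:

```python
import difflib

before = ["one\n", "two\n", "three\n"]
after = ["one\n", "tree\n", "emu\n"]

# wrapcolumn is the optional line-wrapping argument added by this patch;
# everything else on HtmlDiff is protected by the leading-underscore
# convention agreed in the thread above.
hd = difflib.HtmlDiff(wrapcolumn=40)

# make_table() returns just the <table> markup, for embedding in a page.
table = hd.make_table(before, after, fromdesc="before", todesc="after")

# make_file() returns a complete standalone HTML page, stylesheet included.
page = hd.make_file(before, after)
```

That make_file() bakes its stylesheet into the page is exactly why the thread debated whether stylesheet injection deserved its own argument or could be left to after-the-fact search and replace.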
https://bugs.python.org/issue914575
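The trailing-whitespace problem that nearly derailed the checkin above is easy to reproduce in miniature: a reindent.py-style cleanup strips invisible trailing whitespace, so any expected output baked into the source stops matching until it is regenerated. A toy illustration (not the actual difflib test):

```python
# A template with invisible trailing whitespace (two spaces before the
# first newline) - the kind of junk reindent.py removes.
template = "line one  \nline two\n"

# What a reindent.py-style cleanup produces: trailing whitespace chopped
# from every line.
cleaned = "\n".join(line.rstrip() for line in template.splitlines()) + "\n"

# The cleaned text no longer matches the original expectation - which is
# why test_difflib_expect.html had to be regenerated after reindenting.
assert cleaned != template
assert cleaned == "line one\nline two\n"
```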
Being interested in opinions on the rights, wrongs, virtues and demerits perceived by many communities, I found the following article to be of some interest. If you read the article, you start from an offered conclusion that student text books ought to be available under open source type Creative Commons licenses. But as you dig further down, you find out that students are complaining about the cost of text books. The arguments seem to be:
- text books are too expensive anyway;
- you can’t print out much at a go;
- they have a short life;
- they cost as much as the print editions.

So the complaints seem to be much more like those being used on film and music companies than they are about DRM itself. Now I would have to agree that it seems very strange that it costs me the same to buy a book that I have for ever, or an electronic book which I only get to use for 6 months. At the same time the electronic book supplier might be offering me something rather different with the electronic book – things I just can’t do with the paper edition. Searching (it saves me having to read the whole thing although maybe I actually learn less?) and hyperlinking to other reference work, articles, forums and so on are things I just don’t get on paper, or images I can work with.

Why should I get upset about not being able to print the ebook? If I want a print edition then surely that is what I should have purchased to begin with. If I want to print information then I should expect to pay a premium (normally called a royalty) for the right to make copies of the book. That’s totally normal. And just the same as in the software world. It isn’t realistic to think I can buy some software for one machine and then put it on as many machines as I suddenly decide – hey man, that’s piracy.

So I am not totally convinced by the students’ arguments. Yes, we all moan about the price of things, from cars to condos and from tuna to text books. And that’s normal, good and healthy.
And in the bargaining business you always start from an extreme position if you think you can get away with it, so, of course, a study that asks the students the right questions will get whatever result they feel like. I’m equally certain that if you asked the College Professors who specified the books for the term and the course (and maybe even wrote them?) they would not be wanting to give away their work, even though they were paid to gain their expertise. And if you asked the publishers, they will have a view all of their own (but don’t ask me what it is because I haven’t asked them).

So bottom line, it’s popular to attack DRM because, hey, it stops me from doing my own thing. Everyone seems to think that buying something grants you the right to use what you bought anyhow you want. Well, that isn’t the case with automobiles, guns, software, drugs, or a whole load of other things. And so far there’s no proof ebooks are any different.

As a closing thought, the other day I had to spend 350 bucks to buy a paper book that is a definitive reference on company valuation methods. It is out of print, and there are only two companies who can supply it from stock. There are no electronic versions, and it is not in the library. Next? I would have liked an ebook version with search and hyperlinks but it isn’t available. And I bet if it was it would cost a damn sight more than the paper edition – and I would have paid it!

The other day we see reported in the press (in this case SC Magazine) that yet again personal data held by the UK government has leaked its way out into the public domain. In the past we have managed to hit tax claimants, car drivers, and student doctors. But just as we thought things were getting better they and their partners managed to hit a most unlikely and vulnerable sector of society – criminals.
It might well be jingoism to say, “Well, why should they get protection – so who cares.” But as we know, it turns out that there are wrong convictions, and not every prisoner is a paedophile, murderer, rapist or thief. Some are just people who couldn’t pay the mortgage. And would we want people to be readily exposed to blackmail because of their pasts? Making it easier for other criminals to recruit them when they have served the punishment society exacted? Probably not a good idea.

What it also does is reinforce genuine and growing concern that information professionals and management simply do not have any grip on protecting the privacy of information. They may (and the Press may say they may not) have some grasp about access controls. But there seems to be no clue about how to stop the leaking and spreading of information that has been entrusted to them to manage. And the biggest collectors and processors of really high value and fairly accurate personal data are governments, whether national or local.

So the nightmare scenario is that as governments give themselves the right to collect more and more personal information about people (not just their citizens), and consolidate it under the guise of (pick one or more for your own country as required) identity control, taxation, or prevention of terrorism, they are at the same time creating ever bigger targets for hackers, and ever bigger losses of personal data. And whilst there seems to be no effective way, if news stories are correct, of persuading government officials and the companies working for them to raise standards and take personal responsibility for losses, you can forget about personal data staying private.

Now even consultants would normally be hard pressed to find an argument that customers want DRM controls protecting information. If you believe the modern anti-DRM blog sites, DRM is akin to the works of the (please pick a suitable negative deity suited to your particular persuasion).
Imposing controls on honest citizens is an affront to their dignity (let’s just not worry about speed traps because we don’t believe you can be trusted to obey the law). But, and this is serious, there are sectors of the community that want the protection of DRM controls.

One important group is the individuals or organizations that are buying training courses so that they can train themselves or their people so they can carry out licensed services and demonstrate that they have proven capabilities and expertise. They are investing serious money in the enhancement and development of their staff and their businesses. The last thing they want to see is competitors being able to undercut them because they haven’t paid the proper fees. That is unfair competition from the unethical and unscrupulous. Why should the honest and law abiding suffer at the hands of the dishonest? Or is that the purpose of hacking?

Another group who demand DRM controls are those receiving confidential information. Because when confidential information leaks out, the finger pointing starts, and absent DRM controls that can act to identify the actual source of a leak, the selection of a scapegoat can commence. So the presence of DRM controls helps protect the recipients of information, because it can help prove they were not the source of any leak or compromise. And when it comes to keeping your reputation as well as your job, DRM can prove invaluable.

And a third group who want DRM controls are those who need to receive information but the owner of the information wants some actual certainty the information will be looked after properly. In this group are the people who have to send out personal data that has to be human accessible. This is the fastest growing group, because US regulation is now getting much more stern about encrypting personal data in computers.
It is now specifying that there has to be some management and control, and is going for tougher penalties to incentivise compliance. Naturally, people sending out DRM controlled information are better placed to easily prove they did a good job. So before you show me another web site about how bad DRM is and how it is that nothing should be DRM protected, just think for a moment about the fact that it is essential in some industries, and that, without it, your personal data can still be handed around without anyone being able to stop it.

Sophocles, in his work Antigone, said, "No one loves the messenger who brings bad news." This week provided more than its share of light entertainment with blogs of disinformation. Top of the list goes to the people who figure that Lizard Safeguard ought to work on every operating system known to man (or is that persons?). It says clearly on the box, works with PC and Mac – and nothing else. It’s rather like buying a book that’s published in English and then complaining because it isn’t in Danish (and likely never will be). It doesn’t run on an IBM mainframe either, btw. And second error prize has to go to those who say it doesn’t work on the Mac. It does.

As a general rule, two out of three is pretty bad, but what about number 3? “The system is unusable because it insists on always connecting to the Internet before you can use anything.” It is true that you have to connect to the Internet the first time you open a document – it has to check that you are the bona fide purchaser, and it has to get the information needed to decrypt the document. After that, it totally depends on what the owner of the document has chosen to enforce. And nothing to do with LockLizard, because they can only enforce what the publisher defines as the rules.
You don’t go round blaming Microsoft because your system administrator decided how frequently you must update your logon password – so why blame LockLizard for the way the system administrator has configured their controls? But, of course, the modern attitude is to use any possible reason to try and prevent people from implementing DRM controls, even though all the available evidence proves rampant piracy and theft of digitized information. So use every possible opportunity, no matter how ridiculous, to claim rights that don’t exist. And don’t forget to blame the messenger!

The DRM provider creates a toolkit that the publisher implements in whatever way they see fit. But of course the DRM provider is the evil guy because they are stupid enough to implement what the publisher said. It’s Hobson’s choice - damned if you do and damned if you don’t. If you are the DRM provider and you don’t enforce what your customer, the publisher, says, then they will soon be after you, and probably with good cause. But it seems that if you implement what the publishers want, then the people getting the information want to blame you for doing a good job! Of course the attitudes are so different if what is being published is personal data – oh, you should have taken the strongest possible means to protect it – what do you mean you didn’t check who was accessing the data. And so on. Just because it’s not your personal data, it doesn’t mean you have the right to do what you like with it – or does it?

It has been interesting reading the blogs, analysts and industrial pundits who have decided that Web 2.0 is the best thing since ….. well, maybe Web 1.x? and that every being on the planet should endorse its concepts – sharing information, whether personal or corporate, in places whether public or private. Team playing, that’s the thing. As long as we all work together the results will be bigger, better, quicker, cheaper, more FUN.
To try and avoid the risk of being a party pooper I put out a few inquiries (name the usual search engines) about Web 2.0 security. After all, before I go revealing whatever it is that I decide I’m going to reveal, I’d rather like a few ideas about who might get their sticky little hands on whatever I am posting. Well, Google muscled in with (around?) 160,000,000 entries, Yahoo hoisted 316,000,000 and MSN claimed 72,800,000. I admit it. I didn’t read them all, I just didn’t have the time. But the summaries on the first few pages, regardless of who I consulted, were just the same. No security, and no plan for security.

Now this is kind of worrying. We are all supposed to bring in whatever we want and then share it (knowingly or otherwise) with a group of people we may (or may actually not) know, and that’s good. Somehow I figure there’s going to be a ton of material that my CEO (wife, partner, children, friends and acquaintances – the list goes on) does not want me to give to other people. Yes, I know a couple of those photos after the hot tub might be a bit insensitive, but, hell, it happened didn’t it? And maybe I shouldn’t have published that extract about how we always get the best procurement price – but it is what we do, isn’t it?

You see, that’s the problem. We all coexist with and in many groups, all at the same time. But those groups are not connected by common views, objectives, rules or members. And it’s not clear if the members of each group even share common ideals. All Americans support America, of course. Except those who crash jets, or oppose foreign wars, or …. So even when you think you know who the players are you don’t know what they will do with the information you give them, or what information you may have given them access to without your even knowing. The problem is that Web 2.0 does not have DRM controls that help you decide the limits that can be put on the use of your information.
It’s certainly the information super highway – but it’s all about sharing and none of it’s about control. Caveat orator – let the speaker beware – should be our watchword. But no doubt, like the IT fashions and fancies that have gone before, it will require a lot of fingers to be burned before any security, let alone DRM, gets added. So in the meantime, if you don’t want your information being shared about, get some DRM protecting what you have got, whilst you still have it.

I am sure you will have read the article in efflux.com warning of the death of the Microsoft music DRM by August 31, and Google by September 30 (2008). Their conclusion? “Digital rights management technologies are a failure commercially and technically. There are too many standards which are not interoperable, they restrict the customer's freedom to high degrees and they are an everyday nuisance to work with.” Well wash my mouth out with soap (no, I don’t mean SOAP, which is another load of XML). Let’s look at that rather broad conclusion.

There are too many standards? Excuse me? I looked up DRM standards in ISO (the International Organization for Standardization, the people who figure out what is the standard for electric cable so you don’t blow your hands off touching the stuff, the rails that trains run on so that they stay running on them, and serious computer standards – you know, stuff that matters), and drew a blank. Now if someone had said manufacturers’ standards, I could buy into the argument. An everyday nuisance to work with? Well, you might get paid to listen to music tracks. Or maybe they just mean you can’t readily copy tracks and give them away? (I bet you can buy a license to do that, but it’s costly.)
But what most people have yet to understand is that the DRM industry is seriously new, and that there is precisely no desire by major players (tape, CD, DVD, PDF and so on) to provide interoperable standards, when we haven’t even agreed what standards are being looked for, and why, or how they should be implemented. The current market is in what the economists call the ‘first mover’ phase. Literally, that means that the first into the market can do what they like, set any standards they feel like, and charge any price that suits them! The market has been there since the late 1990s. Not exactly recent, then? And mainly because it has not suited any major player to see International Standards emerge, it has not changed.

Actually, rather than worry about the latest squabble about whose industrial standard should be dominant, we should be worrying about what sorts of DRM standards are going to emerge, and how to be able to influence them. They are rather like taxation – you can never stop a government from taxing you. That’s how they stay alive, by taking your money (and if the statistics are correct they get more money out of individuals than they get out of corporations, which ought to be rather worrying).

Despite what the pundits say, DRM is not going to go away. The people who create and sell intellectual property (IPR) have to make their livings from doing that. They are not going to give away their livelihoods any more than you or the pirates are going to. The people you really need to fear are those who do not need a day job to pay their way, or where the day job is so badly supervised that they can afford to waste their employer’s time whilst they pursue illegal interests. And just as the Internet has moved from being a free information source (1995-2002), it is now increasingly a paid information source, and what was previously free is no longer available unless you pay for it.
If you examine information sources now, they consist of sources that have been paid for as a matter of public interest (the Gutenberg series that have made much of the Classical repertoire – Shakespeare, Plato and many other Classical greats such as Chaucer – accessible to any and all). We applaud these measures. Even if modern schooling fails to inculcate any appreciation for works other than Homer Simpson, we agree that works out of copyright should be available to all – but MUST be assured as being the actual words, and not some Bowdlerism. (Thomas Bowdler (July 11, 1754 – February 24, 1825) was an English physician who published an expurgated edition of William Shakespeare's work that he considered to be more appropriate for women and children than the original, and, according to some sources, actually made it more accessible!) Apparently political correctness is not a new disease?

Seekers after truth deserve some measured verity. But you can’t deliver that without DRM. Because DRM is, as the Germans say, “A sword with no handles.” It does not merely control what a recipient can do with the information they receive, it verifies that the information they have gained possession of is truly coming from the claimed source – the publisher. There is nothing negative in this for the publisher. Actually, for the publisher it is a comfort. Because otherwise how else can a publisher ‘prove’ what it is they actually published, given the modern world where information theft and the misuse of information are commonplace? The fact of DRM tools provides an abiding proof that they truly are the source of specific information, and that can help publishers obtain prosecution of unreasonable and irresponsible pirates on the one hand, and avoid prosecution for things they never did publish, on the other. So think carefully before trying to consign DRM to the dustbin of history.
It may be following closely behind such must-haves as death and taxes.

I read, from the July pages of that .” Hmm. Sounds like Pirates? But how many economic analyses have you read that actually examined the effects of piracy on product markets? Probably very few. Big hitters like the music, TV and film people complain about losses in their industries. Designer label clothes and perfume producers complain. Software manufacturers also complain. But if you make a careful analysis, piracy always occurs when there are serious market pricing inequalities that are not addressed by regulation. Or to try to be a bit clearer, when you can buy a product in one country for x, and in another country for a half of x, there is a serious price inequality. If no action is taken to correct a price inequality, this creates the vacuum that pirates, being, at heart, just as serious capitalists as any industrialist, will naturally seek to fill.

This is not some vague modern theory. Rather it is a statement that has stood the test of time. When in the UK in the 1800s it was decided by government to raise the tax on alcohol and tobacco to be significantly above that of France, somewhere only 14 miles away, pirates were handed a market second only to the government forcing you to buy your postal service from them (and they did do that). There are many more recent examples. Designer jeans manufacturers and perfume manufacturers won global legal battles that enabled them to enforce per country pricing for their products, and to be able to prevent countries purchasing at lower prices from reselling to other, higher priced countries.

The same has been true in the information world. I cannot understand how it is that the same information can possibly have a different value in a different country.
I agree that, if translation into a different language from the source is necessary, then that might introduce a cost that has to be paid for, but I also assume that the seller will have thought about the effects of price on market before launching their wares, and will have worried about what price to set in order to make a viable return (please see economists for the math behind price/market/demand models, I don’t have the desire to write a book on it). To give you a practical example, the other day I paid over $350 for a book that it was essential I could read right away; it was out of print, and only one supplier could deliver a copy next day.

The bottom line, as they say, is that any DRM controls built by man can be removed by man. It all depends on cost/effort/desire. Risk analysis. DRM controls over things that can be seen or heard can be ‘removed’ using a camera and a microphone. That cannot be prevented. There will be loss of quality, and, if your product has high embedded functionality (ability to search on information, links to other objects, embedded information) there may be critical loss of functionality that renders a copy of little or no value in the market place. You may also be using features such as dynamic watermarking, that significantly reduce the desire of legitimate users to allow pirates access to their materials because they may be personally identified as the source of piracy. Hopefully, in your DRM system, you are using features such as encryption, that can prevent trivial ways of accessing and copying your work(s). But do please always remember that, to a greater or lesser extent, if the cost and effort are low enough, and the desire is high enough, then copies of the basic information can be made. What can be stopped is stealing embedded functionality that the DRM controls also protect. The greatest effector you can face is desire.
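As an aside, the "dynamic watermarking" idea mentioned above can be sketched in a few lines. Everything below – the function name, its arguments and the stamp format – is invented purely for illustration and does not describe any real DRM product's API:

```python
# Hypothetical illustration of dynamic watermarking: stamp the licensed
# recipient's identity onto every page of a delivered document, so that a
# leaked copy identifies its own source.
def watermark_pages(pages, recipient, licence_id):
    stamp = f"Licensed to {recipient} [{licence_id}]"
    # Prefix each page with the stamp; a real system would render it into
    # the page imagery rather than prepend plain text.
    return [f"{stamp}\n{page}" for page in pages]

marked = watermark_pages(["chapter one...", "chapter two..."],
                         "alice@example.com", "L-0042")
```

The point is not the trivial string stamping but the deterrent it creates: because each legitimate copy carries its recipient's identity, a leaked copy is traceable, which is exactly how watermarking "reduces the desire" of legitimate users to pass their materials on.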
If your market is students and your strategy is to charge a premium price, then you can expect a high ‘desire’ to break your controls. If you choose a lower price because you can sell the same product year on year, then you lower desire. We sell our products at exactly the same price globally. There are no exceptions. There are no premiums. There are no discounts. At one level it’s a rough deal because poor economies have to pay ‘relatively’ more. But there is a level playing field. There is no market price fixing. We do not demand a different price in Chile from Canada, or in the United States from the United Kingdom. And maybe that’s part of the equation? Part of the equation of persuading people it’s not worth the bother of pirating information is to set prices that are both globally consistent and do not overly increase the desire to find a way to bypass them. That’s not part of the choice of a DRM product supplier, although you might prefer to choose one that is realistic about what can be achieved and clear about being a DRM provider and nothing else.

As can be the case, we have been rather overtaken by recent events – in my case an office move – actually only about a mile as the (insert the feathered creature of your preference) flies. Fortunately our office move really did happen over a weekend. And whilst it had precisely zero impact on our customers – everything carried on running seamlessly – that was not entirely the case for us staff. Take me (please), for instance. For reasons that are so obvious I am not going to explain them, my shaver cord happened to be on my desk when the removers came. That is the last time I, or anyone else, saw it. So the emerging beard can be explained by the fact that I am too tight to buy a new Braun top of the range machine without a fight! And what has that got to do with information security?
Well, during last week the UK government published a whole series of reports (on the same day, so you know they had saved up all the bad news for a moment when they hoped nobody was watching) on how major government departments like the Inland Revenue (HMRC in the UK and IRS in the US) and the Ministry of Defense had managed to lose millions of people’s personal data, and that none of it was protected in any way, shape or form. The major thrust of the reports was that management did not consider that the personal data they held in trust was their responsibility to protect, and therefore they did not see any need to spend any money at all on protecting it. A second, and perhaps even more dangerous, revelation was that government collected information that it subsequently used for purposes that were not consistent with what it had been collected for.

It is, of course, now normal that neither government ministers nor civil service officials will be exposed as charlatans or hypocrites, and that nobody will have their careers ended because they clearly failed to live up to the standards (moral, ethical, documented or expected) that they pretended were in force. Governments and their officials should not pretend surprise when the electorate ignore them – if nobody is accountable then who cares who gets elected? Well, with guidance like that from on high, what hope is there for the rest of us? Usually we look to governments and big industry to show the way in both corporate and civil behaviour. After all, they make the law, and they enforce that law. But right now their governance is, to put it mildly, severely lacking. If, to quote a very famous film, “Frankly, my dear, I don’t give a damn,” then why should we believe them when they talk about Copyright and similar protections? It looks like a load of irrelevance, and anyone wanting information protection will have to go with what they can get.
There’s no point in waiting for the prognostications of the governments, or the divinations of standards bodies (international or industry led) because it is totally clear that the leaders are not interested. And that brings me back to the power cord. Nobody at the removers is interested in my problem. So I am going to have to sort it out for myself – and the beard itches!

There are many good and valid reasons why people wish to be able to publish and transmit information electronically. The rise of social web sites testifies to the willingness of people to make information available to selected individuals or groups. However, a willingness to share should not be interpreted as a wholesale license to all and sundry to freely make copies of electronic information and re-distribute it without even acknowledging the original ownership, or paying a Copyright fee. Much has been made, in the technology world, of the concepts of Open Source, CopyLeft, Limited Rights and so on, as arguments for the desirability of making all electronically held information freely available to all comers. In the music receiving industry (as against the music publishing industry) there has been considerable pressure to make copying and redistribution a ‘right,’ merely because it is difficult to prevent.

Such arguments are intellectually barren. They are like saying that because you own a gun you have a ‘right’ to shoot anything you like because it is difficult to stop you, or that because you can buy a car you can drive it anywhere you like and in any manner that you wish. Because people suffer physical harm as a result of such poor behaviours we pass laws to take action if behaviour is unacceptable. Since Copyright is an economic right, we also have laws to protect Copyright owners from economic harm where behaviour is unacceptable. So we must reject the claim that merely because you can do something then you automatically have the right to do it.
(This was never true in Roman Law countries anyway.) Empirical evidence suggests that computer users cannot be trusted to use information provided electronically only and solely for lawful purposes. Regrettably, it is essential to instil enlightened self-interest into the users of electronically provided information. This is essential, simply because, “Computers are all about copying.” (Prof. JAL Sterling and S Mathews in a paper on protocols and interfaces). Not only do computers facilitate copying, but they ensure that such copies are always perfect in every detail. This creates significant problems for those whose Intellectual Property (IP) (whether being offered for sale or being made available for some purpose as the result of a commercial agreement or a ruling of a competent Court, or some other reason) is being distributed electronically.

Agreements for the control of the use of IP are normally very precise, and set out to prevent unauthorized use. Given the propensity of users to do things because they can, it is essential to provide a system of controls (Digital Rights Management or DRM) that actively ensure that the permitted uses must be observed, and cannot be trivially ignored. Similarly, those wishing to make their livings from selling their intellectual capacity (creating works by thinking) as opposed to their physical capacity (making objects) must have the ability to enforce their economic right. To suggest that all the world (tout le monde) cannot make their livings from publishing and selling on the Internet would be a travesty of the so-called knowledge-based economies. Whilst some may seek to demonstrate their intellectual capacities by ‘giving away’ the fruits of their labours as marketing in the hope of selling something else, not everyone has the luxury of such an indulgence. One cannot see authors such as JK Rowling agreeing, which seems to be an indication of her view about copying!
Academics (who, incidentally, are paid to write, and often write as a matter of advertising, as do I) may claim that all intellectual capacity should be available for their study, comment, parody and so on. I wonder how many electronic copies of Deathly Hallows were provided free gratis to the academic community for that purpose? And without any limitation as to copying and being able to pass on? The fact of the matter is that there are cogent economic and political reasons why DRM technologies are required to protect documents in electronic form, and it is wrong to suggest that there should be no mechanisms enforcing the control of authorized use made available and put in place. Let those who wish to give away their work do so if they will, but allow those who sell their work to obtain fair and relevant recompense for their labours.

Those familiar with reading our columns know that we are not normally exercised by minor debates. But we are amazed by the report (which we rely upon as being true) by ZDNet (correct at the time of publication) that the United States has mandated Federal identity controls on the back of a Bill ostensibly to do with "Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Tsunami Relief, 2005." Well, OK, we do have the text in there about the ‘Global War on Terror’ so I guess that this might be a fine argument. But come on – get real – if you read the text it’s all about making the driver’s license the authorized Federally approved identity document! Now I realize that in the USA if you do not have a driver’s license then you are probably some kind of low life that does not exist, so therefore you are of no significance, and obviously not a terrorist, and therefore we do not need to worry about you. After all, a terrorist could not possibly drive a car without a license just to deliver a bomb, could they?
Hey, but that’s only a social analysis that says that the downtrodden underclasses are not likely to be terrorists, so we don’t need to worry about them. Let’s try thinking out of the box for a moment. Why would you want this legislation when the people you are trying to catch are running outside of the system in the first place? After all, history teaches us that criminals are generally outside the system because being inside the system is a distinct disadvantage! Think for a moment about the costs that are going to be involved in granting the Secretary of Homeland Security this (wild?) desire to collect, at the expense of the States, and therefore at the direct expense of the Citizen, without there being any demonstrable benefit from this to that citizen (unless you count the collection of information about law-abiding people, because no self-respecting terrorist of any kind would be caught by such an obvious trap), provable personal identifiers to standards that have not been promulgated and security controls that have not yet been identified. Yes, this bill will be the mother of all pork barrels for industries that strut their stuff about all manner of personal identification methods (despite the simple fact that there are no recognized international standards in this field, so anyone can claim anything they like – snake oil?), and the bill completely fails to say anything of any substance about standards, compliance, certification, accreditation and so on. So we screw the US car driver for shedloads of money – is that a problem? Well, if the punter is willing to pay, then I guess not. After all, that’s what a capitalist economy is all about. Capitalism is about what industry can screw out of consumers and governments can screw out of both (not maybe what Karl Marx actually wrote, but close enough). So this may actually come down to cost.
If legislation has not had public debate, is of questionable public benefit, and has a cost that may not be appropriate to the actual benefit achieved (since 9/11, precisely what has happened? Well, at the current rate of experience, nothing), then why are we bothering? Surely governments are being held to account for spending taxpayer money wisely and effectively. But, of course, if this is really a bill to hand the Secretary a future blank check for getting funding, even if that office has so far failed to do anything tangible for the protection of the public, then it is completely understandable. But then, if you accept that by insisting on the registration of the law-abiding (at an unknown, but certainly high, cost to them personally) you can catch terrorists, who are, by definition, not law-abiding and therefore outside of the system, then this is something you should go for. But please suspend the use of logic, finance, governance or anything similar. On the other hand, if you feel like funding another Star Wars program personally, then step in line and vote.

If you believed everything you heard from the music and film industries, you could get very confused very easily. And you could all be fooled into thinking that they are the only industries that exist when it comes to needing DRM (it’s a bit like saying that governments are the only organizations that need privacy!). So Amazon and Sony now say DRM is dead? Or is this nothing more than the usual marketing hype after a disaster? Actually, music and film have had a long, chequered and cyclical love/hate relationship with DRM, mainly because they were the first people to get seriously exposed to piracy, and to feel pain where it truly hurts – in the bank account. They got there in the 1960s, long before the PC or the iPod were more than the work of science fiction writers like Arthur C Clarke.
The fact that it would take almost 50 years before book publishers and organizations woke up and smelt the coffee is just history. Their first outing was the cassette tape. Up to then, copying a record was practically impossible, and although reel-to-reel recorders were around, they were too expensive and difficult to use for normal mortals. But anyone could use a cassette, and everyone did. They tried suing people who sold cassette decks that would copy from one to another, but they failed, and thus started the hunt for the Holy Grail of the industry – a way of stopping people from copying music (and later film) so that they could charge premium rates for their wares. Alas, it was not to be. They lost the cassette wars, and later the VHS wars. They won the DAT (Digital Audio Tape) war – remember it? Maybe not: the hardware manufacturers lost a fortune because no consumer would buy into a format whose copying controls were so aggressive you couldn’t necessarily copy your own personal recordings. CD controls proved to be a lost cause, and DVD region pricing was outflanked by hardware manufacturers who were determined not to lose out on their own market opportunity this time around. I remember a big meeting in LA when the music industry said they were going to implement secure MP3! Laugh like a drain? Well, I must confess I was close to being thrown out of the meeting. Lately it seems that Sony decided on a rootkit approach to preventing copying that also opened up a serious hacking opportunity. So maybe you should see the current position as nothing more than posturing by an industry that wants to be the content deliverer to the mass consumer market, and has only one objective in mind – its own profitability. That’s not to say that there’s anything bad about profit. All of us are in business (in a particular sense), and if we don’t show a profit, one way or another, then things are very bad indeed (see Charles Dickens on Wilkins Micawber).
And what most people (and organizations, for that matter) trade on is their intellectual property. In the ‘knowledge economy’ that is what we are selling. So we have to safeguard that IP or anyone and everyone (I used to say the World and His Wife, but apparently that is no longer Politically Correct, and saying it would render me liable to being excommunicated by the deity or politician of your choice) can rip (RIP – Requiescat In Pace or ‘you will rest in peace’) you off and make your personal trade value slightly lower than that of a Eunuch’s scrotum. (Apparently that is still Politically Correct!) What one has to think about very carefully, is what value DRM controls provide to you (either as an individual or as a business) in order to say if they matter, and when it is that they matter. The music/film industries are under the cosh because you can either copy their stuff, or you can’t. If you can buy a legal version, and then copy it by recording the sound or filming the picture on a seriously high quality screen then you are in business. And they are not. But that is a problem for those industries. You can buy cracked codes for satellite TV systems with little or no effort or risk. Try eBay – if it’s too cheap can it be real? But, for the document handling industries, things are only just beginning to warm up. Computing is all about copying. When you read this you are looking at a copy of what I wrote, in fact I am doing that when I watch the characters come up on screen in front of me. And that’s the major hazard for the PC and the Internet. Everything has been set up to promote copying. And the software does not give a ‘tuppeny damn’ whether the information is to be public, be controlled, or kept totally secret. Which is bad news if you are in the business of selling, or simply providing, information (your knowledge, expertise, capability, private advice to clients, tax return computations, price lists, sales manuals, repair manuals ….. 
the list seems endless). So enter DRM. DRM is the only approach available that lets you specify not only who can see your proprietary information, but what use they can make of it. The film and music industries were interested in preventing copying, but you will likely need to be able to stop people using information after a particular date, or make sure they can only see it a couple of times, or print it once, or any number of other things. Encryption does none of these things; it just prevents the un-anointed from being able to access information, rather than limiting how and when they might use it. So whilst the music and film businesses figure they need to flex their market in order to maximize their own profits, they are operating in a different environment. Commerce, industry, publishers, authors and ordinary mortals need DRM technology to protect their personal and commercial interests. They have nothing else. They are exposed to commercial or personal ruin by the uninhibited ability of the PC and the Internet to copy and re-distribute their information without any control at all. So whilst the record industries declare DRM is dead, we say, “Long live DRM.”
http://feeds.feedburner.com/DigitalRightsManagementIprAndCopyControl
crawl-002
refinedweb
7,269
58.21
At 11:26 PM 9/12/2006 +0100, Paul Moore wrote: ... To be honest, I've completely forgotten at this point. You could skim the "svn log" looking for things related to Python 2.5 if it really matters. :) I just remember that I had to make changes. Some were minor things, like Python 2.5 distutils including its own "upload" command, and others were more substantial, possibly related to other distutils changes or to language changes. >? Nope. There are only two things that the SVEM format can't do (besides being uninstalled without a packaging system): 1. It can't include __init__.py for "namespace packages" (packages like peak.* and zope.* that are split across separate distributions). You can have namespace packages, mind you, it's just that you can't include anything in the __init__.py for the package, because it's not installed when a SVEM installation is done. 2. Project-level data files, i.e., resources in the "root" of an egg. There's no way for you to access these resources if a project is installed SVEM, because there's no per-project directory for that stuff to go in. These are both extremely rare things to have in a project. >Fair enough. Presumably, as automatic building from source should have >worked, this doesn't imply a higher dependency on distributors >providing binary eggs than the old approach did. Correct.
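The namespace-package limitation can be seen concretely. Below is a minimal sketch; the package name mypkg and the module names are invented, and it uses the pkgutil-style convention rather than setuptools' declare_namespace, but the constraint is the same: the shared __init__.py can contain nothing but the path-extension boilerplate, because only one copy of it ever "wins".

```python
import os
import sys
import tempfile

# Simulate a package split across two install roots, as with peak.*/zope.*.
# All names here (mypkg, part_a, part_b) are invented for illustration.
root = tempfile.mkdtemp()
for site, mod in [("site1", "part_a"), ("site2", "part_b")]:
    pkg = os.path.join(root, site, "mypkg")
    os.makedirs(pkg)
    # The namespace __init__.py holds ONLY the boilerplate; any real
    # content here would be unreliable for the reasons given above.
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("__path__ = __import__('pkgutil').extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, mod + ".py"), "w") as f:
        f.write("value = %r\n" % mod)

sys.path[:0] = [os.path.join(root, "site1"), os.path.join(root, "site2")]
from mypkg import part_a, part_b  # both halves import under one package name
print(part_a.value, part_b.value)
```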
https://mail.python.org/pipermail/distutils-sig/2006-September/006681.html
[Fragmentary related-discussion listing from roseindia.net: question teasers on Jakarta Struts 1 and 2 tutorials, Struts migration, JSP project help, Java final-year project ideas, open-source frameworks, and software project management. The scraped snippets are too garbled to reconstruct individually.]
http://www.roseindia.net/tutorialhelp/comment/8975
Haskell: Processing program arguments

My Prismatic news feed recently threw up an interesting tutorial titled ‘Haskell the Hard Way’ which has an excellent and easy to understand section showing how to do IO in Haskell. About half way down the page there’s an exercise to write a program which sums all its arguments, which I thought I’d have a go at. We need to use the System.getArgs function to get the arguments passed to the program. It has the following signature: > :t getArgs getArgs :: IO [String] Using that inside a ‘do’ block means that we can get the list of arguments as a List of String values. We then need to work out how to convert the list of strings into a list of integers so that we can add them together. The way I’ve done that before is with the read function: > map (\x -> read x :: Int) ["1", "2"] [1,2] That works fine as long as we assume that only numeric values will be passed as arguments but if not then we can end up with an exception: > map (\x -> read x :: Int) ["1", "2", "blah"] [1,2,*** Exception: Prelude.read: no parse I wanted to try and avoid throwing an exception like that but instead add up any numbers which were provided and ignore everything else. The type signature of the function to process the inputs therefore needed to be: [String] -> [Maybe Int] With a bit of help from a Stack Overflow post I ended up with the following code: import Data.Maybe intify :: [String] -> [Maybe Int] intify = map maybeRead2 maybeRead2 :: String -> Maybe Int maybeRead2 = fmap fst . listToMaybe . reads reads has the following signature: > :t reads reads :: Read a => ReadS a which initially threw me off as I had no idea what ‘ReadS’ was.
It’s actually a synonym: type ReadS a = String -> [(a,String)] (defined in ./Text/ParserCombinators/ReadP.hs) In our case I thought it’d do something like this: > reads "1" [(1, "1")] But defining the following in a file: a :: [(Int, String)] a = reads "1" suggests that the string version gets lost: > a [(1,"")] I’m not sure I totally understand how that works! Either way, we then take the list of tuples and convert it into a Maybe using listToMaybe. So if we’d just parsed “1” we might end up with this: > listToMaybe [(1, "")] Just (1,"") I only have a basic understanding of fmap because I’m not yet up to that chapter in ‘Learn Me A Haskell’. As far as I know it’s used to apply a function to a value inside a container type object, i.e. the Maybe in this case, and then return the container type with its new value. In our case we start with ‘Just (1, “”)’ and we want to get to ‘Just 1’ so we use the ‘fst’ function to help us do that. The main function of the program reads like this, wiring it all together: main = do args <- getArgs print (sum $ map (fromMaybe 0) $ (intify args)) I’m using fromMaybe to set a default value of 0 for any ‘Nothing’ values in the collection. I compile the code like this: > ghc --make learn_haskell_hard_way [1 of 1] Compiling Main ( learn_haskell_hard_way.hs, learn_haskell_hard_way.o ) Linking learn_haskell_hard_way ... And then run it: > ./learn_haskell_hard_way 1 2 3 4 10 > ./learn_haskell_hard_way 1 2 3 4 mark says hello 5 15
https://markhneedham.com/blog/2012/04/08/haskell-processing-program-arguments/
Speaking of XSLT, that stylesheet standard provides for the importation of subsidiary stylesheets, which can contain generic templates written by others. The name of a template can be qualified to a namespace, again avoiding clashes. In other words, my stylesheet can call a named template that has a distinctive name qualified by a namespace (which has been chosen by the template's author). I could even use more than one library of templates imported into my stylesheet, and different namespaces for each library would avoid duplicate names of the templates. Many recommended standards of the World Wide Web Consortium (W3C) promote namespaces for modularity.

Brief introduction to XML namespaces

The formal designation of a namespace is a URI. Generally, you'll see URLs (one form of URI) as the identifier. Because URIs use a wide range of characters, there would be a severe impact on the XML syntax if we had to attach the full URI directly to every qualified name. Therefore, the XML Namespaces Recommendation also defines prefixes that are directly attached to names. Syntactically, you use quotes (single or double) around the URI string, and a colon to set off the prefix; other characters present no interference. The prefix is a standard XML name. You can avoid using a prefix by assigning one URI to all unprefixed names or by laboriously (and dangerously) reassigning the default namespace wherever needed in the document. For practical purposes, prefixes are required when you intermix vocabularies. Like other specifications for XML, the XML Namespaces Recommendation is published by the W3C. The W3C is developing version 1.1 of the Namespaces Recommendation, where the formal designation will be an Internationalized Resource Identifier, or IRI. The differences between URIs and IRIs lie in how certain characters are escaped to make them benign.
Let's look at some real namespace syntax: an element named mddl:custodian. This is not a mere custodian element; it is a custodian element in the vocabulary identified by the URI. The prefix mddl is used to associate the element name with that URI. The URI is in the mddl.org domain; mddl.org is the organization that maintains the Market Data Definition Language, an XML vocabulary in which custodian is one of many elements. (This vocabulary defines elements pertaining to investments and portfolio management.) Notice that mddl.org has made provisions to define other vocabularies and to issue later versions of the MDDL vocabulary by having several fields in their URI. The local part of the name is the name within a particular vocabulary. For names that are not qualified by a namespace, the local part is the only part that exists. For a prefixed name, the local part is what comes after the colon. For example, elements named book:title and book:isbn are in the same namespace but have different local parts. Elements named book:title and person:title have the same local part but are entirely unrelated because they belong to different namespaces.

The prefix, used in qualified names

Prefixes simplify discussion of your work. You can discuss book:title and xsl:apply-templates and the like while you develop an XML-based system, and only occasionally approach the details of their respective namespaces. In some technical sense, the prefix doesn't matter because it's a transient abbreviation that associates names with a namespace URI. However, it's a best practice to establish logical and consistent prefix names to boost developer productivity. The prefix qualifies and associates names of elements and attributes, and also applies to keyword-type text strings in some situations. For example, book:title is equivalent to "title as a characteristic of a book" when read in an XML document, which is convenient when a person has to scan some XML.
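That last distinction is easy to check with a namespace-aware parser. In Python's xml.etree.ElementTree (used here purely as an illustration; the urn:example URIs are invented), qualified names come back in {URI}localpart form, so the two title elements are visibly unrelated:

```python
import xml.etree.ElementTree as ET

# The urn:example URIs are invented stand-ins for real vocabulary URIs.
doc = """<root xmlns:book="urn:example:book" xmlns:person="urn:example:person">
  <book:title>Emma</book:title>
  <person:title>Dr</person:title>
</root>"""

# ElementTree resolves prefixes into Clark notation: {URI}localpart.
tags = [child.tag for child in ET.fromstring(doc)]
print(tags)
# ['{urn:example:book}title', '{urn:example:person}title']
```

Same local part, title, but two entirely different names once the namespace is taken into account.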
By referring to the place where the prefix book is tied to a URI, one can find a more formal specification that states, for example, "title as defined in the book vocabulary issued by abaa.org in 1999." Several W3C recommendations use the term QName to refer to an XML name that may (or may not) be qualified to a namespace, and if you read specifications regularly, you will even occasionally see "QName-but-not-NCName" to indicate an XML name that must be qualified to a namespace. (The term NCName refers to an XML name without a colon. NC means "no colon." ) For example, named templates in XSLT can be named with QNames rather than with simple XML names, facilitating the publication of a library of templates that are all named in a particular namespace. A QName uses the colon (:) as a special character to separate the prefix from the local part. Naturally, the prefix and local part cannot contain a colon, but they otherwise follow the prescribed syntax for XML names. More than one prefix can be associated with a particular URI. XML standards will generally force resolution of prefixes to their associated URIs, so that names are the same if their local parts and URIs match, even if the prefixes differ. A prefix can only be associated with one URI at a time. Every article about XML namespaces has to point out that the URI goes nowhere, meaning there is no need to fetch any material that the URI appears to identify. Indeed, there is no requirement to set up a server for the identified location or to have fetchable material at the location. The XML Namespaces Recommendation only requires string-matching to establish that two URIs are the same, though it does briefly mention that the namespace value is a URI reference and implies that this value should follow the syntax of RFC 2396 of the Internet Engineering Task Force. 
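Because the prefix is only a transient abbreviation, two documents that pick different prefixes for the same URI yield identical names once parsed. A small sketch (the urn:example:mddl URI is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Different prefixes, same (made-up) namespace URI.
d1 = '<a:custodian xmlns:a="urn:example:mddl">X</a:custodian>'
d2 = '<b:custodian xmlns:b="urn:example:mddl">X</b:custodian>'

t1, t2 = ET.fromstring(d1).tag, ET.fromstring(d2).tag
print(t1)        # {urn:example:mddl}custodian
print(t1 == t2)  # True: prefixes resolve away; URI + local part match
```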
The URIs issued by the W3C always use the http: protocol and the w3.org domain name, so use of HTTP URLs can be considered the safe approach, and thus a best practice. The domain name is the key to avoiding clashing names. By using the worldwide Domain Name System, the namespace URI provides an answer to the "Says who?" question. If you have a domain name, you have a piece of the world where you control the names, and this applies to your XML namespaces as well as your servers. For example, mddl.org is the domain name belonging to an organization that defines XML vocabularies pertaining to investments, and nobody else can assign names and URLs under the mddl.org domain. In the future, the W3C may establish a guiding principle for the namespace URI to point to a fetchable resource. Various W3C committees are discussing alternatives. Most likely, the material identified by the URI will itself be an indirect pointer to an actual schema or description, allowing the syntax of the real description to evolve over time. For now, the URIs used for W3C namespaces point to simple text pages stating that the URI is a namespace; try fetching one in a browser as an example. The W3C document that defines the namespace syntax and function uses the term namespace name to refer to the URI of the namespace. The XML Information Set Recommendation, which defines the meaningful parts of an XML document, also uses the term in the same way. However, the XPath functions name() and local-name() return the prefix when applied to a namespace node. Therefore, it is a best practice to either avoid the term namespace name or only use it in a context where it's clear what you mean. XQuery uses the terms namespace prefix and namespace URI when discussing its syntax. The latter can safely be used to refer to the URI in a namespace declaration.
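One practical consequence of plain string-matching: URIs that an HTTP client would treat as the same address are different namespaces if they differ in even one character, including letter case in the host name. A sketch with an invented example URI:

```python
import xml.etree.ElementTree as ET

a = ET.fromstring('<t xmlns="http://example.org/ns"/>')
b = ET.fromstring('<t xmlns="http://EXAMPLE.ORG/ns"/>')

# HTTP hostnames are case-insensitive, but namespace URIs are not:
print(a.tag)           # {http://example.org/ns}t
print(a.tag == b.tag)  # False -- comparison is character-by-character
```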
Namespaces already in use

Namespaces are designated for the various XML vocabularies, whether issued by the W3C itself like MathML or by an industry consortium like DSML, which came from an OASIS Technical Committee. (See Resources.) You can do the same within your organization. Furthermore, other W3C recommendations in the XML family use namespaces to distinguish what they define. The XML Recommendation defines no element names, but describes two attributes that can be used in XML documents, xml:space and xml:lang. The XML Base Recommendation adds xml:base to the list. In each case, use of the xml prefix means that they are in a namespace that is defined by default for every XML document. In a recent Erratum, the W3C declared that you cannot use any prefix besides xml on the names built into XML. The W3C uses a unique prefix for each vocabulary it defines. Each recommendation takes pains to point out that these prefixes are not functionally special, just used consistently. Returning to the MathML example, the MathML 2.0 Recommendation suggests an outer mml:math element with an xmlns:mml declaration, where mml is their favored prefix. Again, the mml string is not special, other than for humans reading the documentation. The URI itself is the string that actually identifies the MathML vocabulary. (In that vocabulary, an element with the local part name math is the outer element.) The XML Inclusions Recommendation presented a design dilemma: The inclusion construct couldn't be reduced to a single string value, as could xml:base and xml:lang. An inclusion declaration may need as many as three parts: the href of the included resource, its presumed encoding, and its parsing method. These could be joined as attributes on an element, but naming that element include or xml:include would impinge on the set of available element names in XML, causing messy exceptions for humans and machines alike. The solution was to define a namespace just for this one element.
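The built-in binding of the xml prefix is observable with any namespace-aware parser: no declaration is needed, and xml:lang is reported under the fixed URI http://www.w3.org/XML/1998/namespace:

```python
import xml.etree.ElementTree as ET

# The xml: prefix needs no xmlns declaration; it is predeclared
# for every XML document.
e = ET.fromstring('<p xml:lang="en">hello</p>')
print(e.attrib)
# {'{http://www.w3.org/XML/1998/namespace}lang': 'en'}
```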
A typical include element looks like this:

<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="other.xml"/>

In this example, notice that the element carries the declaration of the xi prefix inside its own start tag. If one file has several XML inclusions, you may want to declare xi at the top, in which case each element is still named xi:include, but doesn't carry the namespace declaration inside its start tag:

<xi:include href="other.xml"/>

Having a namespace for the generalized XML include also keeps it separate from the application-specific includes in XSLT and XML Schema.

XSLT and XML Schema are two cases in which an elaborate recommendation requires a full document to describe the transformation or data design, respectively. These documents are known as XSLT stylesheets and schema definitions. Following one of the basic design principles of XML, these are XML documents that use specially-namespaced vocabularies. In fact, XML Schema defines one vocabulary for the schema definition document, and another vocabulary for schema items that occur in the instance documents or data defined by the schema. Schema definitions and XSLT stylesheets may intermix the elements and attributes of their language with elements and attributes from other namespaces, so prefixes are needed.

It is a best practice to use the prefixes that the W3C uses. For example, the XSLT Recommendation and most books about XSLT use the prefix xsl to identify the elements of the XSLT vocabulary. If you stick with the xsl prefix for your stylesheets, you can then discuss your deployment plans and consult XSLT books without the mental overhead of translating prefixes.

Establishing qualified names in your XML

XML documents have a tree structure, descending from the document element or outermost element. A namespace can be declared on any element, allowing it to be recognized within the sub-tree defined by that element and all its children. The declaration resembles an attribute, but most W3C recommendations consider it to be a separate type of node.
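The choice between declaring a prefix on each element and declaring it once at the top is purely cosmetic: the expanded name is identical either way. A sketch demonstrating this with Python's ElementTree (the `urn:example:inc` URI and `i` prefix are hypothetical):

```python
import xml.etree.ElementTree as ET

# Prefix declared on the element that uses it:
inline = ET.fromstring('<doc><i:item xmlns:i="urn:example:inc"/></doc>')

# Prefix declared once, at the top:
at_top = ET.fromstring('<doc xmlns:i="urn:example:inc"><i:item/></doc>')

# Same expanded name either way; only the declaration position differs.
assert inline[0].tag == at_top[0].tag == '{urn:example:inc}item'
```

This is why the article says only readability is at stake: declare at the top when an element repeats, inline when it appears once.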
When you look at the XML, you'll see the namespace declarations inside the start tags of the relevant elements, right alongside the attributes. There are two syntax variations:

xmlns:prefix="URI"
xmlns="URI"

The first one is commonly used; it associates a prefix with a URI. The second one declares that there is a default namespace for those elements that lack a prefix. Within the overall design of XML, both of these syntaxes fit under the reservation of names beginning with the characters xml for XML purposes. The default namespace is initialized to be no namespace URI at all, so there is a syntax for undefining a previously-defined default namespace by assigning it to the null string. (Null strings are technically valid as URIs, but disallowed as namespace URIs.) Prefixes can be set to different URIs, but cannot be undefined, at least for XML 1.0 documents.

Variations of namespace declarations

In April of 2002, the XML Working Group of the W3C announced it was considering a revision of XML namespaces that would permit the assignment of a namespace prefix to the null string. In the September edition of the proposal, the usage was restricted so that such a declaration could only be used to undefine a prefix for the purposes of avoiding conflicts and eliminating unwanted namespace nodes, and a qualified name could not use the prefix at any place in the document where it was assigned to null. For now, note that the exclude-result-prefixes feature of XSLT can be used to remove unwanted namespace nodes if they aren't in use, should you need to do so.

A prefix can be associated with one URI at the top of a tree, but associated with a different URI within a sub-tree by having an xmlns:prefix="new-uri" declaration in the start tag of the element atop the sub-tree, then associated with another URI (or the original URI) in a sub-sub-tree inside the sub-tree, and so on. Doing this can cause confusion for those who have to read the raw XML document.
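Both syntax variations, including undefining the default namespace with xmlns="", can be observed directly in a parser. A sketch with Python's ElementTree (catalog, title, extras, and the URI are invented names):

```python
import xml.etree.ElementTree as ET

doc = '''<catalog xmlns="urn:example:books">
  <title/>
  <extras xmlns="">
    <note/>
  </extras>
</catalog>'''
root = ET.fromstring(doc)

print(root.tag)        # {urn:example:books}catalog -- default namespace applies
print(root[0].tag)     # {urn:example:books}title   -- inherited by the child
print(root[1].tag)     # extras -- xmlns="" undeclared the default here
print(root[1][0].tag)  # note   -- and the undeclaration is inherited too
```

Note the asymmetry the article describes: this works for the default namespace, but an xmlns:prefix declaration cannot be undefined this way in XML 1.0.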
Such rebinding can look compact, but imagine how hard it would be to find all the xmlns declarations in a large document. You can apply the following preferred practices:

The best practice here is to use a given prefix for only one namespace throughout all XML documents in a system. If this is impractical, at least try to associate the prefix with only one URI within a single document. Another best practice is to make all the necessary associations up in the start tag of the document element, so that they apply throughout the whole document. This makes it easier to find all the declarations. The number of namespace declarations that can appear in a single start tag is unlimited.

When a software tool generates XML, it has to place namespace nodes (xmlns declarations) within the tree so that they are in effect where needed to qualify names. If a namespace has an associated prefix, the namespace can be declared higher up than the element where it's needed. This can have the desirable effect of reducing redundant declarations. The Xalan XSLT processor is one example of a tool that does this. You must declare all prefixes before using them, except xml and xmlns, which can be assumed to be in effect and unchanging throughout all XML documents.

You may be tempted to exploit the attribute-like syntax to have some of your declarations set up as default attributes in an external entity. The best practice here is to have the declarations contained within the document, thereby reducing assumptions and dependencies.

Use of the default namespace (the one applicable to unprefixed element names) is a judgment call. If you can get accustomed to prefixing all element names everywhere, you avoid some pitfalls. However, some people may experience prefix fatigue, or feel that one namespace applies to the real content of the document and that making it the default is a way to make that distinction.
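The rebinding hazard warned about above can be made concrete: the same prefixed name can mean two different things in the same document. A sketch with Python's ElementTree, using two hypothetical URIs:

```python
import xml.etree.ElementTree as ET

doc = ('<root xmlns:p="urn:example:one">'
       '<p:item/>'
       '<branch xmlns:p="urn:example:two"><p:item/></branch>'
       '</root>')
root = ET.fromstring(doc)

# Both elements are written p:item, yet they have different expanded names,
# because the inner declaration rebinds p within its sub-tree.
print(root[0].tag)     # {urn:example:one}item
print(root[1][0].tag)  # {urn:example:two}item
```

The document is perfectly legal; the best practice of one-prefix-one-URI exists to spare human readers, not parsers.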
If you follow that latter path, you will need to establish some design principles for determining the namespace that can be the default in a given document. Of course, the rules will benefit only those people who actually have to read (and possibly create) XML documents. The best practice regarding use of prefixes is to either use them everywhere or to use them on all items except those that are the real content being delivered to the end user. Use prefixes for all process control elements that are modified only by system developers, including XSLT stylesheets, schema definitions, and so forth. Use prefixes on all items coming from XML vocabularies that are external to your organization, with the possible exception of real content being delivered to the end user.

Attributes are a little different

An attribute can appear in a different namespace than the element that contains it. For example, <movie:title xml:lang="en"> has an attribute that is not from the movie namespace. If an attribute name has a prefix, its name is in the namespace indicated by the prefix. However, if an attribute name has no prefix, it has no namespace. This is true even when a default namespace has been assigned. The W3C Namespaces in XML Recommendation makes that point with an example in which a default namespace and the prefix n1 are declared with the same URI on an element x, whose child element good carries both an unprefixed attribute a and a prefixed attribute n1:a.

The elements are affected by the declaration of a URI for the default namespace. That is, both x and good are associated with that URI because it's the default namespace. The attribute n1:a is also associated with that namespace, due to its use of the n1 prefix, which is bound to the same URI. There is no conflict in the a attribute appearing twice, because while n1:a is in the namespace, the unprefixed a is not; the latter is not in any namespace.

Since xml:lang is illustrated above, note that it is a best practice to use the xml:lang attribute as the way to declare that the content of an element is in a particular natural language.
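The W3C's point about unprefixed attributes can be verified mechanically: even with a default namespace in force, a bare attribute name lands in no namespace at all, so it does not clash with a prefixed attribute bound to that namespace. A sketch of the scenario described above, using a hypothetical urn:example:n1 URI in place of the Recommendation's:

```python
import xml.etree.ElementTree as ET

# Default namespace and the n1 prefix are bound to the SAME URI,
# and good carries both a and n1:a -- legal, per the Recommendation.
doc = ('<x xmlns="urn:example:n1" xmlns:n1="urn:example:n1">'
       '<good a="1" n1:a="2"/>'
       '</x>')
good = ET.fromstring(doc)[0]

print(good.tag)  # {urn:example:n1}good -- elements DO get the default namespace
print(sorted(good.attrib))
# ['a', '{urn:example:n1}a'] -- the unprefixed attribute has no namespace
```

Had the unprefixed a inherited the default namespace, this would be an illegal duplicate attribute; the parser accepting it demonstrates the rule.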
When a W3C vocabulary specifies both elements and attributes, it typically will not require that the attributes be qualified to the namespace as long as they occur on elements that are qualified. Returning to the XML Include example, in <xi:include xmlns:xi="...">, the href and parse attributes are specified as meaningful attributes of the xi:include element, so an XML parser that is able to act upon the xi:include element must interpret those attributes as details of the include operation.

In the full universe of possible attribute names, all names beginning with the letters "xml" -- in that order, but in any upper-/lower-case combination -- are reserved to be defined by the W3C. That way, a namespace declaration like xmlns="URI" can use the syntax of an attribute rather than a distinct syntax. Most W3C specifications call it a namespace declaration rather than an attribute, and it's a best practice to observe the difference in conversation. (The Namespaces Recommendation document itself refers to these declarations as reserved attributes just long enough to introduce them. DOM Level 3 also treats these declarations as attributes from the xmlns namespace.) A namespace declaration like xmlns:fooname="URI" has the same syntax as an attribute with a qualified name, but the initial letters "xml" signal its special role, and it, too, is a namespace declaration in conversation. However, an attribute like xml:space="preserve" is still an attribute in the proper terminology, though it is in the reserved namespace.

If your XML documents get processed by an application that recognizes XML but is not namespace-aware, the QNames will probably survive and the namespace declarations will be treated as attributes. The xmlns prefix has been specified by the first Namespaces in XML Recommendation to not have an associated URI. The W3C may opt to change this in the future.
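The distinction between a namespace declaration and a true attribute is not just conversational: a namespace-aware parser consumes xmlns declarations and does not report them as attributes at all. A sketch with Python's ElementTree (the `p` prefix, `b` attribute, and URI are made up):

```python
import xml.etree.ElementTree as ET

# xmlns:p looks like an attribute, but a namespace-aware parser
# treats it as a declaration; only the real attribute survives.
elem = ET.fromstring('<a xmlns:p="urn:example:x" p:b="1"/>')
print(elem.attrib)  # {'{urn:example:x}b': '1'} -- no trace of xmlns:p
```

A non-namespace-aware processor, by contrast, would report both xmlns:p and p:b as plain attributes with colons in their names, which is exactly the "QNames will probably survive" behavior the article describes.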
This may not make much of a difference in the real world, since most XML tools and processes manage namespace declarations automatically. Where they don't, you usually have a method to create or avoid creating a namespace node in the tree-like representation of the XML. When the XML resides in a file, the namespace declaration has the standard xmlns sequence in the start tag of an element, and the XML parser that reads the file will know to recognize xmlns whether or not it's associated with a namespace. (Since XML launched without namespaces, you could potentially encounter an early XML parser that is not namespace-aware; avoid such parsers if the best practices presented here are at all relevant to you.)

Validation and namespaces

The XML Schema Recommendation has complete provisions for defining a document structure with namespaced elements and attributes. Furthermore, it defines a special QName data type for strings that must be valid as qualified names. A schema definition document can specify the target namespace for the document structure.

The older document type definition (DTD) syntax for specifying document structure is not namespace-aware. However, DTDs tolerate element and attribute names that contain colons. If you want to use DTDs and namespaces together, you can do so by designating specific prefixes and treating them as fixed parts of the element and attribute names. The technique is explained in detail in C. M. Sperberg-McQueen's memo in The Cover Pages (see Resources). Expect substantial discomfort if you must do this. (DTDs allow the assignment of values to attributes not explicitly present in the XML document. Setting an attribute named xmlns through this DTD mechanism is a bad idea.)

To this point, I have covered the foundation established by the W3C. Part 2 provides more depth on the best way to establish your own XML vocabularies. In Part 2, you'll also see renaming techniques that are namespace-aware.
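While DTDs cannot check namespaces and full schema validation is out of scope here, a crude namespace-aware sanity check is easy to sketch in application code. This is an illustration only, assuming a hypothetical urn:example:mine target namespace, and is no substitute for XML Schema validation:

```python
import xml.etree.ElementTree as ET

def in_namespace(elem, uri):
    """Crude check that an element's expanded name is in the given namespace."""
    return elem.tag.startswith('{' + uri + '}')

root = ET.fromstring('<m:doc xmlns:m="urn:example:mine"/>')

print(in_namespace(root, 'urn:example:mine'))   # True
print(in_namespace(root, 'urn:example:other'))  # False
```

A check like this catches the common failure where an instance document omits or misspells its namespace declaration and every element silently ends up in no namespace.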
Resources

- Delve deeper into XML namespaces and define your own XML vocabularies in David Marston's second article, "Plan to use XML namespaces, Part 2."
- For another look at the subject, read Uche Ogbuji's article "Use XML namespaces with care" (developerWorks, April 2004).
- Discover what XML 1.1 and Namespaces 1.1 are about, what changes they bring, and how they affect other specs and users in "XML 1.1 and Namespaces 1.1 revealed" by Arnaud Le Hors (developerWorks, May 2004).
- The Namespaces in XML 1.0 Recommendation from the W3C sets the standard.
- The XML Schema Recommendation of the W3C has three parts: Primer, Structures, and Datatypes.
- The W3C is developing Architectural Principles about identifiers.
- XInclude is on its way to becoming a W3C recommendation.
- The XML Information Set Recommendation of the W3C is an abstract design that identifies the significance of the parts of an XML tree structure.
- "Real-world XML Schema" offers naming ideas (developerWorks, January 2002).
- Part 6 of Christina Lau's "XML and WebSphere Studio Application Developer" series provides coaching on the use of namespaces in schemas.
- Find out more about MathML, a W3C vocabulary, and Directory Services Markup Language (DSML), which comes from an OASIS Technical Committee.
- Read C. M. Sperberg-McQueen's memo on DTDs and namespaces in The Cover Pages.
- See the latest "XPointer xmlns() Scheme" draft for a slightly different way to declare namespaces.
- URI Generic Syntax, RFC 2396, gives plenty of detail on URIs.

David Marston has worked with XML technologies since late 1998. Over his 25+ years in the computing business, he has been involved with all aspects of software development. He is a graduate of Dartmouth College and a member of the ACM. He is on the Next-Generation Web team at IBM Research. You can contact him at David_Marston@us.ibm.com.
http://www.ibm.com/developerworks/xml/library/x-nmspace.html
Implement File Link Web Extensions API

Reporter: Fallen; Assigned: darktrojan
Tracking: blocks 1 bug
Thunderbird tracking flags: thunderbird_esr60 64+ fixed, thunderbird64 fixed, thunderbird65 fixed
Attachments: 3 attachments, 8 obsolete attachments

It would be nice if we can allow FileLink add-ons to be WebExtensions. The API is pretty plain and simple:

manifest.json:

> "cloudfile": {
>   "name": "Sekrit File Transfer",
>   "service_url": "",
>   "new_account_url": "",
>   "settings_url": "/content/settings.html",
>   "management_url": "/content/management.html"
> },

API:

> browser.cloudfile.onUploadFile.addListener((fileInfo, abortSignal) => {
>   // fileInfo = {
>   //   id: 123,
>   //   name: "teh file.txt",
>   //   data: new ArrayBuffer(...)
>   // }
>
>   return { url: "https://..." + fileInfo.id };
> });
>
> browser.cloudfile.onDeleteFile.addListener((fileId) => {
>   // ...
> });
>
> browser.cloudfile.setQuota({
>   uploadSizeLimit: -1,
>   spaceRemaining: -1,
>   spaceUsed: -1
> });
>
> await browser.cloudfile.getQuota({ uploadSizeLimit: 123123 });

settings_url and management_url will be moz-extension:// pages that are shown in a (content) iframe. I'm not quite sure yet how to do any of the communication the current iframe does; there doesn't seem to be a comparable precedent in the browser. I guess we could use postMessage().

The following changes should be done on the filelink code:

* A possibility to register/unregister filelink providers without XPCOM
* Move the learn more link outside of the iframe
* Make iframe settings/management form validation result affect the main XUL dialog's disabled states
* Adjust the extraArgs code to not apply to WebExtensions (they'll use their own storage mechanism)
* Make sure clicking on links in the iframe will open them in the browser, unless they are self moz-extension links.

Geoff, any comments on the API, or required changes?

I'm not very familiar with FileLink or how it works, but this seems good.
I presume abortSignal is a callback to cancel the upload if necessary? And that returning from the event will resolve a promise in our code (I've not seen that before).

abortSignal is and can be passed directly to fetch(), which is convenient. The calling code can then just call signal.abort() and the fetch call will abort. I just need to expose it via Cu.importGlobalProperties. browser.cloudfile.onUploadFile.addListener() should be able to return a promise, so you could also do:

browser.cloudfile.onUploadFile.addListener(async ({ id, name, data }, abortSignal) => {
  let resp = await fetch("https://...", { body: data, signal: abortSignal });
  let rdata = await resp.json();
  return { url: rdata.url };
});

The returning in comment 0 will just return the value, but I think if you await a non-promise you just get the value. I'll make sure the code accepts both. The returning in this comment will return a promise resolving with the url object, given the function is async.

Ah, I remembered I meant to say: if you use File instead of ArrayBuffer, you can specify the file name on the object instead of doing it separately. You might want an ArrayBuffer though, so just an idea.

Some relevant discussion:

<&Fallen> darktrojan: different thing, I'm looking into the wx cloudfile api and still don't have a smart idea how the wx iframe for example in the add cloudfile account dialog should tell its parent that the form is valid. Right now the parent reaches into the iframe and checks form validity, but from the perspective of a WebExtension that could be black magic. Any ideas?
1:45 PM <&Fallen> The current WebExtension APIs seem to just use global APIs to set status, but that doesn't seem right to me from a separation of concerns standpoint
1:46 PM <&Fallen> The only other option I see is using postMessage and documenting it, but then having the parent check form validity doesn't seem so bad
1:47 PM <%darktrojan> point me to some code, Fallen ?
1:49 PM <&Fallen> and line 284
1:50 PM <%darktrojan> thanks
2:00 PM <%darktrojan> and not a single filelink extension is compatible with 60
2:01 PM <&Fallen> The WeTransfer one will be :) It even works as a WebExtension.
2:01 PM <&Fallen> And box and hightail are built in
2:02 PM <%darktrojan> I only see box and it doesn't have a form
2:07 PM <&Fallen> darktrojan:
2:10 PM <%darktrojan> I wonder if that dialog should be reworked
2:11 PM <%darktrojan> I mean, if you choose Box, you get a link which says "Get a Box account" that opens a browser, and a button that says "Set up account", which opens another dialog with a webpage
2:11 PM <%darktrojan> that's confusing to begin with
2:13 PM <%darktrojan> also an API shouldn't be dependent on how our UI works
2:15 PM <&Fallen> Whatever it looks like, there needs to be something at account setup time that allows the provider to determine what settings and prerequisites are required to set up the account. Are you suggesting limiting what the UI can look like by providing just a few field options like the former options ui for legacy add-ons?
2:16 PM <&Fallen> The OAuth dialog won't fit into the new cloudfile provider account iframe, I guess that is another reason why it is separate.
2:19 PM <%darktrojan> what if we consider a provider "incomplete" until the webextension tells us otherwise, and just use the main window UI for everything except picking a provider?
2:19 PM <%darktrojan> or even for picking a provider somehow
2:20 PM <%darktrojan> you already use the preferences tab for a modify settings iframe, no?
2:29 PM <%darktrojan> fyi this link is a 404
We'd still need an "add account" type button under the list of cloudfile providers, or a [ + ] [ - ] button set, which has a dropdown with the list of providers that can be added. Is this what you had in mind?

This is a lot more than I expected and it might be difficult to justify backporting this to tb60. I do think it would be cleaner though and allow for fewer modal dialogs. One caveat: with a single global enable/disable, we are setting in stone that there will only be one cloudfile account per extension. If you for example have a Filelink for WebDAV provider, then you could only ever use one WebDAV share unless you duplicate the add-on with a different id.

That seems unfortunate (to limit it to one).

That's more or less what I had in mind. You could do multiple accounts per extension by passing around some sort of identifier. I guess how you do it depends on whether account information is stored by Thunderbird or by the extension.

Ok, different approach. Also renamed onUploadFile -> onFileUploaded because it seems more consistent with naming that Firefox uses.

manifest.json:

> "cloudfile": {
>   "name": "Sekrit File Transfer",
>   "service_url": "",
>   "new_account_url": "",
>   "config_url": "/content/config.html"
> },

API:

> browser.cloudfile.onFileUploaded.addListener((account, fileInfo, abortSignal) => {
>   // account is a cloudfile account object
>
>   // fileInfo = {
>   //   id: 123,
>   //   name: "teh file.txt",
>   //   data: new ArrayBuffer(...)
>   // }
>
>   return { url: "https://..." + fileInfo.id };
> });
>
> browser.cloudfile.onFileDeleted.addListener((account, fileId) => {
>   // ...
> });
>
> browser.cloudfile.onAccountAdded.addListener(async (account) => {
>   // TODO we could also consider adding manifest defaults for the upload size limit
>   account.uploadSizeLimit = 123;
>   await browser.cloudfile.update(account);
> });
>
> browser.cloudfile.onAccountDeleted.addListener((account) => {
>   // ...
> });
>
> // cloudfile account object:
> {
>   id: 123,
>   configured: true, // set true if ready to use
>
>   // default to manifest entry if provided
>   name: "Sekrit File Transfer", // TODO could make this readonly?
>   icons: { 32: "/icon32.png" }
>
>   // quotas. Check if spaceRemaining/spaceUsed is really necessary.
>   uploadSizeLimit: -1,
>   spaceRemaining: -1,
>   spaceUsed: -1,
>
>   // default to manifest entry if provided.
>   // Maybe double check if we really need a service and new account url
>   // with the new layout or if that can be part of the html page the
>   // provider controls.
>   serviceUrl: "https://...",
>   newAccountUrl: "https://...",
>   configUrl: "/content/config.html"
> }
>
> (async function() {
>   // These will not return accounts for other extensions
>   var account123 = await browser.cloudfile.get(123123);
>
>   var accounts = await browser.cloudfile.getAll();
>
>   account123.uploadSizeLimit = 123;
>   await browser.cloudfile.update(account123);
> })();

There is no create/delete method for accounts, because I'd like to avoid providers creating their own accounts in the background without user action. I could be convinced to add a delete method. I'd appreciate feedback on the TODO parts above, and of course general feedback on whether this is a good approach.

Where are account credentials stored in this scenario? I think the best place for them is in Thunderbird's password manager, although we'd have to be sure that we only hand them out to the right extension.

I don't think there's a need for service_url or new_account_url, or for onAccountAdded/onAccountDeleted events. If Thunderbird's storing account credentials, all the extension needs to do is communicate with the service's APIs.
I think this should be up to the WebExtension; they can use the logins API if it ever lands (bug 1324919). For now the password would likely be in browser.storage.

Yes, this is also what I had in mind.

I can see service_url/new_account_url going away, but we should have an event when the account is added/removed. This would allow extensions to do some initial default config, or clean up storage if they need to.

Looks like we are pretty much in consensus here, let me know if you have final thoughts on the account events. This is mostly your code Philipp, so it probably should have another reviewer, but let's see what you think first.

By the way, I think this namespace should be cloudFile, and the manifest entry cloud_file. For consistency.

Comment on attachment 9023511 [details] [diff] [review]
1481052-cloudfile-api-1.diff

Review of attachment 9023511 [details] [diff] [review]:
-----------------------------------------------------------------

My original patch has some other code I am missing, e.g. making sure the iframe used in the settings is a content iframe. Did I write that after I sent you the code, or did it go missing? I'll upload my latest patch for reference, feel free to merge things together. This is a great start, but I think we are not quite there yet.

::: mail/components/extensions/parent/ext-cloudfile.js
@@ +117,5 @@
> +
> +  cancelFileUpload(file) {
> +    this.emit("uploadAborted", {
> +      id: this._fileIds.get(file.path),
> +    });

Not quite sure why you moved the abort logic into a new event handler and having the client deal with the AbortController? I thought this was pretty elegant and saved us from another event handler.
@@ +163,5 @@
> +    let contractID = "@mozilla.org/mail/cloudfile;1?type=" + this.extension.id.replace(/@/g, "-");
> +    let self = this;
> +
> +    // unregisterFactory does not clear the contract id from Components.classes, therefore re-use
> +    // the class id from the unregistered factory

Registering via XPCOM is fickle. I have a patch that adds code to the cloudfile provider so we can dynamically register providers, without XPCOM. Will upload in a sec. This patch was meant to allow both the xpcom way and the new way, but I think in the end we should probably just get rid of the xpcom way and require add-ons to update.

Comment on attachment 9023511 [details] [diff] [review]
1481052-cloudfile-api-1.diff

Review of attachment 9023511 [details] [diff] [review]:
-----------------------------------------------------------------

::: mail/components/extensions/parent/ext-cloudfile.js
@@ +21,5 @@
> +    reader.readAsArrayBuffer(blob);
> +  });
> +}
> +
> +class CloudFileProvider extends EventEmitter {

add @implements {nsIMsgCloudFileProvider}

@@ +27,5 @@
> +    super();
> +
> +    this.extension = extension;
> +    this.accountKey = false;
> +    this.lastError = "";

don't see this used anywhere

::: mail/components/extensions/test/xpcshell/test_ext_cloudfile.js
@@ +30,5 @@
> +  Assert.ok(Cc[contract].createInstance(Ci.nsIMsgCloudFileProvider));
> +  await extension.unload();
> +  Assert.throws(
> +    () => Cc[contract].createInstance(Ci.nsIMsgCloudFileProvider),
> +    /NS_ERROR_XPC_CI_RETURNED_FAILURE/

?

(In reply to Philipp Kewisch [:Fallen] [:📆] from comment #12)
> Not quite sure why you moved the abort logic into a new event handler and
> having the client deal with the AbortController? I thought this was pretty
> elegant and saved us from another event handler.

It would be elegant but it doesn't work. The extension receives a copy of the object, so calling abort on the original does nothing. It might be different if we had a real AbortController, but we can't do that, so I don't know.
The mock AbortController needs to go, that is true. Last time I tried to add Abort{Controller,Signal} to Cu.importGlobalProperties, it seemed to work without. Do we really not have the real AbortController available? I think there were options to ensure that some parameters are not copied but transferred raw.

Geoff, for the record, I've started to update the patch I referenced to the API from comment 7. It seems this can be done without the UI changes for now. Ping me if you want my current state.

Comment on attachment 9023595 [details] [diff] [review]
Dynamically register cloudfile (defunct) - v1

And this patch is totally wrong, it registers instances not classes. Probably needs a rethinking.

Here's what I currently have. It's capable of multiple accounts per provider, but just wired to one for now.

This is the current non-XPCOM registration patch. I've added observer service notifications so the UI can be fixed.

Comment on attachment 9024647 [details] [diff] [review]
1481052-cloudfile-api-2.diff

Review of attachment 9024647 [details] [diff] [review]:
-----------------------------------------------------------------

::: mail/components/extensions/parent/ext-cloudfile.js
@@ +28,5 @@
> +    super();
> +
> +    this.extension = extension;
> +    this.accountKey = false;
> +    this.lastError = "";

unused?

@@ +104,5 @@
> +    }
> +    return;
> +  }
> +
> +  if (results && results.length) {

prefer comparing to numbers, i.e. length > 0

@@ +146,5 @@
> +  } catch (ex) {
> +    callback.onStopRequest(null, null, Cr.NS_ERROR_FAILURE);
> +  }
> +
> +  if (results && results.length) {

> 0
Registering can be done in two > + * ways, either implicitly through using the "cloud-files" XPCOM category, or explicitly using > + * this function. > + * > + * @param {Object} aImplementation The nsIMsgCloudFileProvider implementation to register @param {nsIMsgCloudFileProvider} I was looking into this as well, which I probably should not have without chatting with you. Here is a patch that pretty much brings it in line with the API in comment 7, applies on top of attachment 9023596 [details] [diff] [review]. Maybe there is something useful you can salvage from it. I also looked into how the AbortController thing could work, see the big comment in the patch. I'll stop now, let me know if you need my feedback. Okay, there's some good stuff in there. I don't understand why you're so keen on passing abort signals around. What's wrong with having an ordinary event and letting the extension deal with it? I'll work this into what I've got and then hopefully we're good to go. People are starting to get impatient. I guess part of it is wanting to figure out how to make it work. The other part is that from experience writing WebExtensions, I always found it annoying when I manually have to keep a Map to handle API calls and responses that are somewhat related. A while ago openerTabId was not implemented, so when I needed that property I had to manually remember what the opener was to get this property, implementing most of the tab event handlers. The other beauty is that when using AbortSignal, it can directly be passed to fetch(), which most if not all FileLink providers will be using. Anyway, I am ok with not using AbortSignal until I figure out how it could work. Just keep in mind that at some point we may hav to deal with backwards compatibility if we are making API changes. 
You could also consider checking if we can pass a Promise instead of an AbortSignal for now, so you could do: onFileUpload.addListener(async (acc, info, abortPromise) => { let controller = new AbortController(); abortPromise.catch(() => controller.abort()); await fetch({ signal: controller.signal, ... }); }); This would also save having to keep a map of AbortControllers and likely won't require using a child script. Or, maybe even just ignore the cancel event for now and add it later. What do you think? Some things I haven't mentioned yet: * Dynamically registered providers aren't true XPCOM objects, so they will QI to nsIMsgCloudFileProvider, but not instanceof it. * On a cancelled upload the extension can return { aborted: true } or throw the error that fetch throws when using abortSignal. * uploadCanceled is still spelled wrong. Outstanding things that I can remember right now: * The settings and management pages don't have access to "browser". I've tried to fix this for the preferences tab, but to do so it needs to be loaded in a chrome browser, which causes other things to break. * Many of the things in the schema could do with a description so the generated documentation is better. (Also the documentation generator could be better.) * My comment 11. Comment on attachment 9026345 [details] [diff] [review] 1481052-cloudfile-api-3.diff Review of attachment 9026345 [details] [diff] [review]: ----------------------------------------------------------------- I'm fine with the renaming you mentioned in comment 11. Please also make sure to file followups for the remaining items as necessary, and a followup to de-xpcom cloudfile accounts completely. r+ with these issues considered: :::. 
::: mail/components/extensions/parent/ext-cloudfile.js
@@ +256,5 @@
> +      context,
> +      name: "cloudfile.onFileDeleted",
> +      register: fire => {
> +        let listener = (event, { id }) => {
> +          let account = convertAccount(self.provider);

m-c caches the converted objects, can you file a followup to do this as well?

@@ +305,5 @@
> +      };
> +    },
> +  }).api(),
> +
> +  async getAccount(accountId) {

Other WX apis only use .get() .getAll() .update(). Any reason you changed this to add Account? Are you anticipating getting other things in the cloud file API?

(In reply to Philipp Kewisch [:Fallen] [:📆] from comment #28)
> :::.

I've renamed it to aProvider.

>.

Not totally sure what you're asking here, but in the interests of not changing too much (since we want to take these changes all the way through to ESR), let's stick with what we've got. Once extensions are using the WebExt API, we can change whatever we like underneath it.

> :::
>.
> jsm#363

I think that might've been the case, but you're probably right about class info.

> @@ .

I can't actually work out what you're trying to say!

> ::: mail/components/extensions/parent/ext-cloudfile.js
> @@ +256,5 @@
> > +      context,
> > +      name: "cloudfile.onFileDeleted",
> > +      register: fire => {
> > +        let listener = (event, { id }) => {
> > +          let account = convertAccount(self.provider);
>
> m-c caches the converted objects, can you file a followup to do this as well?

I don't think it's necessary, convertAccount just assigns some variables into a new object. It's probably as fast as a Map lookup or something like it, and compared to the WebExt overhead is probably negligible.

> @@ +305,5 @@
> > +      };
> > +    },
> > +  }).api(),
> > +
> > +  async getAccount(accountId) {
>
> Other WX apis only use .get() .getAll() .update(). Any reason you changed
> this to add Account? Are you anticipating getting other things in the cloud
> file API?
browser.windows.getAll() gets all windows, browser.tabs.update() updates a tab, browser.cookies.get() gets a cookie, but browser.cloudFile.get() doesn't get a cloudFile, it gets a cloudFile account. That we called the events onAccountAdded and onAccountDeleted instead of onAdded and onDeleted implies we should name the methods getAccount, getAllAccounts and updateAccount.

This is an interdiff between the last two patches. To prevent blowing interdiff's little mind, I undid the renaming of some files (cloudfile back to cloudFile). It's not very interesting, but I did change the test to use a second file for the upload abort, as it was causing problems "uploading" the same file twice.

Pushed by geoff@darktrojan.net: FileLink WebExtensions API; r=Fallen

For the record: This seems to have caused:
TEST-UNEXPECTED-FAIL | [snip]/mozmill/cloudfile/test-cloudfile-add-account-dialog.js | test-cloudfile-add-account-dialog.js::test_accept_enabled_on_form_validation

Pushed by mozilla@jorgk.com: Disable failing part of test-cloudfile-add-account-dialog.js. rs=bustage-fix

Pushed by geoff@darktrojan.net: Work around test that causes Mozmill to die a horrible death; rs=bustage-fix

Comment on attachment 9027742 [details] [diff] [review]
1481052-cloudfile-api-4.diff

I guess we want this in the next beta so it can go into TB 60.4 ESR. Beta (TB 64 beta 4):

I've rebased this for ESR, just waiting for TryServer to tell me I did something wrong.

TryServer says go!

Note EventManager invocation was changed between ESR60 and now:

It was. That would've failed on Try if I'd actually added the test to the manifests. :/

Comment on attachment 9029539 [details] [diff] [review]
1481052-cloudfile-esr-2.diff

OK, we'll take it for TB 60.4.

TB 60.4 ESR:

Is there any idea on timescale for releasing this? We heavily depend on filelink/DL and cannot upgrade without it working.
We are aware of the security implications of not upgrading, but we can't do it without this, and no amount of browbeating will change that. We cannot lose the functionality as we use it extensively to transfer lots of large files which our business depends on. It puts us in an invidious and dangerous position, and we really don't understand why these hooks were never implemented by the Mozilla team prior to trying to force everyone to upgrade. There were some breakages we could live without, but not this. We fully appreciate the work of the developer who has added patches for this, and congratulate him on his persistence.

Upgrade to what, John? The old FileLink functionality is still there in all current versions of Thunderbird.

AFAIK, not a single one of the old FileLink addons is working today. This is because several related APIs got broken in TB60, and there's no way to rebuild without the legacy SDK: if you have additional information on that front, I would really appreciate it.

Pretty sure the one we ship (box.com) is still working.

(In reply to Magnus Melin [:mkmelin] from comment #48)
> Pretty sure the one we ship (box.com) is still working.

It's absolutely no use if you can't use it for legal reasons (and we can't)

(In reply to Geoff Lankow (:darktrojan) from comment #46)
> Upgrade to what, John? The old FileLink functionality is still there in all
> current versions of Thunderbird.

This gives a succinct summary of the issue. Enough said. So the question remains as to when 60.4 will be released? We are stuck with our current version until we can use an upgraded plugin.

We hope to get 60.4 released soon, probably within the next 2 weeks. The api won't of course magically make any add-ons work again.

(In reply to Magnus Melin [:mkmelin] from comment #50)
> We hope to get 60.4 released soon, probably within the next 2 weeks.

OK thanks

> The api won't of course magically make any add-ons work again.

Of course not.
But without the code no one can update their add-ons, can they?

Sure they can update without it too. The old api is still available for 60.

(In reply to Magnus Melin [:mkmelin] from comment #52)
> Sure they can update without it too. The old api is still available for 60.

Err - I am no dev, but as far as I understand it add-on devs are expected to use the WebExtensions API, and this bug is for file links via web extensions, which until 60.4 (now) is missing. So until this is released they can't properly build an updated add-on as per Mozilla recommendations. Or am I missing something? I'll see what happens on this and hope that something comes out of the woodwork. (We've rewritten one add-on we use for WebExtensions, but the filelink is a bit beyond us currently.)

You can get a preview of TB 60.4.0 for Windows here:

Upload to box.com still works, I tested that. I doubt that all the FileLink add-ons are broken, here are a few that claim to be compatible with TB 60: NextCloud: and Mega: should work. is deliberately misleading.
*All* "legacy" add-ons that have been made compatible with TB 60 still work. There is information on what needed to be done:

Yes, you're missing the point that all the old providers were add-ons using the (old) internal api, which is still available. The add-on may need other adjustments of course. (In 60 we still support old style add-ons.)

(In reply to Jorg K (GMT+1) from comment #54)
> You can get a preview of TB 60.4.0 for Windows here:

Don't use Windows anywhere. Haven't for a decade or more..... :-)

> Upload to box.com still works, I tested that. I doubt that all the file like

As per my previous, I can't upload files to a 3rd party service. Yes, that may be a bit of a shock, but that is the way of the world. We only upload them to our own server (GDPR & all that jazz). Hence DL is very good for us as we control what is going on, where, and when.

> is
> deliberately misleading.

I'll leave you to argue that point with the add-on developer as per his #c47 (I think that is the developer) and his bug

All I know is when we upgraded, the add-on failed to work, and it appeared to be for the reasons he describes. The only solution for us was to downgrade.

Well, I wanted to fix my old add-on to support TB60. It's true that the old API is still there, but the add-on broke in TB60 due to *other* API changes. That's the reason most old add-ons broke as well. I don't think I said anything misleading there. At the time of writing, *no* other filelink addon was supported on TB60. I simply wanted to fix the legacy one, but as I wrote, the tools to do so seem to be missing in TB60. I'm lost here. If you can point me now to some info to rebuild the xpi for TB60 I would be glad, so I can restore the functionality while waiting for the new webext interface to be available.

John, you can have TB 60.4.0 preview for any platform you want.

Wavexx, as per bug 1493528 comment #10, your add-on can be made to work in TB 60.

(In reply to Jorg K (GMT+1) from comment #58)
> John, you can have TB 60.4.0 preview for any platform you want.

OK thank you

> Wavexx, as per bug 1493528 comment #10, your add-on can be made to work in
> TB 60.

I'll comment on the other bug - I'd be happy to get my PFY involved as he knows a bit more than me, but with Christmas et al he won't be available for the next 3+ weeks.

So I'm giving a stab at updating the addon to the webext API. In the "settings" page which is used to set up an account, I don't see a way to perform validation before the account is added. The onAccountAdded event is emitted too late. Form validation seems to be somewhat working as required fields seem to prevent submission, but I don't want to perform account validation at each onchange event.
Hooking up to onsubmit doesn't work, and returning false doesn't prevent submission. Ideas?

The simple answer, and not really what you want, is that the settings page is about to be removed completely. In hindsight we shouldn't have even put it in the API, but we didn't realise that at the time. What you should do instead is put all your configuration in the "management" page, and set the "configured" flag on the account when you're ready. Thunderbird 60 doesn't currently do anything with the flag, but I will change that in an upcoming release. I'm going to update the docs now, as they don't reflect what I've just said.

The "Box" provider though does prevent submission until authentication is provided (cannot say what happens next, as I don't have a box account). I guess I shouldn't even attempt this if the page is going to be removed? In this case, should I just put an empty page? When will the single-instance limit be lifted? I have several cases where I'd like to select which server I want to use.

> I guess I shouldn't even attempt this if the page is going to be removed? In
> this case, should I just put an empty page?

That's what I've been doing. For up-to-date Thunderbird versions you don't even need to do that.

> When will the single-instance limit be lifted? I have several cases where
> I'd like to select which server I want to use.

Thunderbird 68. You could test in a beta version already. Technically it should be possible in TB60, but the UI is horribly broken, so don't even try it.
https://bugzilla.mozilla.org/show_bug.cgi?id=1481052
Feature #17290 (open): Syntax sugar for boolean keyword argument

Description

We frequently use keyword arguments just to pass true value out of the truthy/falsy options given. And in many such cases, the falsy option is set as the default, and only the truthy value is ever passed explicitly. I propose to have a syntax sugar to omit the value of a keyword argument. When omitted, it should be interpreted with value true.

gets(chomp:)
CSV.parse(" foo var ", strip:)

should be equivalent to

gets(chomp: true)
CSV.parse(" foo var ", strip: true)

Additionally, we may also extend this to pragmas.

# frozen_string_literal:

to be equivalent to:

# frozen_string_literal: true

Updated by marcandre (Marc-Andre Lafortune) 9 months ago

I'm personally hoping that strip: could be syntax sugar for strip: strip...

Updated by baweaver (Brandon Weaver) 9 months ago

I would concur with Marc-Andre on this one, I believe punning would be a more valuable feature, especially for keyword arguments. Examples:

def method_name(foo: 1, bar: 2, baz: 3)
  foo + bar + baz
end

foo = 1
bar = 2
baz = 3

method_name(foo:, bar:, baz:) # 1, 2, 3
# => 6

# Also punning in hashes which would be distinct from block syntax:
{ foo:, bar:, baz: }
# => { foo: 1, bar: 2, baz: 3 }

May open a separate ticket on those ideas or amend a current punning proposition.

Edit - opened a ticket
https://bugs.ruby-lang.org/issues/17290
Bidding with Securities: Comment

Yeon-Koo Che (Columbia University, Economics Department)
Jinwoo Kim (Yonsei University, School of Economics)

Abstract: Peter DeMarzo, Ilan Kremer and Andrzej Skrzypacz (2005, henceforth DKS) showed that steeper securities yield higher expected seller revenue in security-bid auctions. We show that, once adverse selection is taken into account, steeper securities may be more vulnerable to it, and thus yield lower expected revenue, than flatter ones.

Peter DeMarzo, Ilan Kremer and Andrzej Skrzypacz (2005, henceforth DKS) analyzed auctions in which bidders compete in securities, i.e., the winning bidder's payment includes a share of cash flow or (ex-post) value generated from the auctioned object. With attention restricted to feasible securities,[1] their main finding concerns the role of the steepness of securities in determining the seller's revenue. They show that the steeper the payment to the seller as a function of the realized value, the higher is the seller's expected revenue. A shift from a security, say a debt, to a steeper one, say equity or a call option, flattens the surplus accruing to a bidder as a function of his future realized value, and this levels the competitive gaps between bidders. Hence, the competition becomes intensified.[2]

In this comment, we wish to add to DKS' analysis a caveat that the adverse selection problem should be an important part of security and auction design consideration. Unlike cash bids, security bids are difficult to evaluate when the seller does not know the buyer's exact type. For instance, a 40% share of an asset managed by a buyer may be less valuable than a 30% share of the same asset managed by a different buyer, if the latter is much more competent. Lacking such information, a seller may fall victim to adverse selection, choosing a wrong bidder. This problem has been noted by several authors such as William Samuelson (1987), Simon Board (2007) and Charles Zheng (2001) in the context of a fixed security design.
Our concern here is its relevance for security design: We show that the adverse selection problem may lead to rankings of alternative security designs and auction formats that are quite different from, and in fact opposite to, those found in DKS. In particular, under a reasonable circumstance, a steeper security is more vulnerable to adverse selection, and could result in a poorer revenue performance, than a flatter security.

To illustrate, suppose there are two buyers, 1 and 2. Buyer i = 1, 2 has a project which requires initial investment of x_i and generates a (gross) return of x_i + v_i. Suppose v_1 > v_2 >= 0 and x_1 > x_2. That is, buyer 1's project generates a higher return (so is efficient to select) but requires a higher investment than buyer 2's project. This assumption is reasonable. A financially distressed or bankrupt firm will more likely turn profitable again under the management willing to infuse more cash, for instance. Or a firm that can generate a greater value from an under-performing target is more likely to have a higher opportunity cost (e.g., of pursuing other targets) or stand-alone value. The assumption is equally compelling in other contexts, such as the government sale of oil leases.

[1] A security is feasible if the shares accruing to the payee and payer are both nondecreasing in the realized gross return. As DKS observe, all standard financial claims satisfy this monotonicity condition.
[2] This insight originates from Robert Hansen (1985) and is further developed by Matthew Rhodes-Kropf and S. Viswanathan (2000).
Buyer i will then bid a share s i that will leave him with zero profit: (1 s i )(v i + x i ) x i = 0, or s i = v i x i + v i. The efficient buyer is no longer assured of winning. Buyer 1 wins if v 1 x 1 > v 2 x 2, but he loses if v 1 x 1 < v 2 x 2. In the former case, the seller s revenue is x 1+v 1 x 2 +v 2 v 2 > v 2, so it is higher than that under the cash or debt auction. But in the latter case, the seller receives x 2 + v 2 x 1 + v 1 v 1 < v 2, which is less than what either cash or debt auction would bring. Consider finally the steepest design, a call option. This is equivalent to the buyers competing for a debt issued by the seller. Again the logic of the second-price auction implies that buyer i will ask for repayment rate of x i, which will just pay for his investment cost. Since the low bid wins in this case, bidder 2 wins always! As a result, the seller receives max{v 2 + x 2 x 1, 0}, which is the least among all revenues from all other security auctions, regardless of who wins in the equity auction. 3 In what follows, we generalize this observation, establishing first the sense in which a steeper security is more vulnerable to adverse selection than a flatter security and second how the adverse selection problem affects the seller s revenue. 1 Model Each bidder s type is represented by his project, indexed by the net expected return, v [v, v]. The investment project with return v requires initial investment x(v), where x(v) 0 and since 3 When v1 x 1 < v2 x 2, the seller s revenue from equity auction is x 2 + v 2 x 1 + v 1 v 1 > max{v 2 + x 2 x 1, 0}, v 1 x 1+v 1 is increasing in v 1 and since the LHS equals the middle term when v 1 falls to v 2 + x 2 x 1. 2 5 x ( ) 0. The project v generates gross return Z, which is distributed over [z, z], where z x(v). Without loss, we can redefine v = E[Z v] x(v) to be the net expected return. Clearly, it is efficient for the buyer with the highest project return to be selected. 
Adverse selection arises when the seller fails to select the buyer efficiently. The ensuing model is precisely the same as DKS, except that we allow the investment cost x(v) to rise with the project type v. This feature is quite plausible, as motivated in the introduction, and also serves to identify the different extents to which alternative security designs and auction formats are vulnerable to adverse selection.

As with DKS, we consider an ordered set of feasible securities S = {S(s, Z) : s in [0, 1]}: For all v, E[S(s, Z) | v] > 0. For simplicity, we write E[S(s, Z) | v] as ES(s, v) and its derivatives with respect to v and s as ES_v(s, v) and ES_s(s, v). Also, as in DKS, we compare two ordered sets of securities, S1 and S2, in terms of steepness: S1 is steeper than S2 if for all S1 in S1 and S2 in S2, we have ES1_v(s1, v) > ES2_v(s2, v) whenever ES1(s1, v) = ES2(s2, v). Note that S1 being steeper than S2 implies that for all S1 in S1 and S2 in S2, ES1(s1, v) and ES2(s2, v) are single-crossing in v: whenever ES1(s1, v) = ES2(s2, v) for some v, we have ES1(s1, v') > (<) ES2(s2, v') if v' > (<) v.

2 Ranking Security Designs

2.1 Second-Price Auctions

We first consider a second-price auction in which each bidder submits a security from an ordered set S and the seller selects the bidder with the highest security (highest s), who then pays the second-highest security. In a weakly dominant equilibrium, a type v submits a bid s(v) in [0, 1] such that

  ES(s(v), v) - v = 0.   (1)

Adverse selection manifests itself as the failure of monotonic equilibrium strategies. In particular, we focus on the possibility of extreme adverse selection where bidders employ decreasing bidding strategies. To this end, differentiate both sides of (1) with respect to v to obtain

  ES_s(s, v) s'(v) + ES_v(s, v) - 1 = 0,  or  ES_s(s, v) s'(v) = 1 - ES_v(s, v).

Since ES_s(s, v) > 0, s(.) will be decreasing if and only if

  ES_v(s, v) > 1 whenever ES(s, v) = v.
(2)

It follows from this that a steeper security is more vulnerable to adverse selection than a flatter design:

Proposition 1. Let s1(.) and s2(.) denote the equilibrium strategies under securities S1 and S2, respectively. Suppose that S1 is steeper than S2. Then, if s2(.) is decreasing, s1(.) is decreasing also. Also, if s1(.) is increasing, s2(.) too is increasing.

Proof. Since s1(.) and s2(.) constitute the equilibrium under S1 and S2, respectively, we have ES1(s1(v), v) = v = ES2(s2(v), v) for all v. Then,

  ES1_v(s1(v), v) > ES2_v(s2(v), v) > 1,

where the first inequality follows from S1 being steeper than S2 and the second from s2(.) being decreasing. Thus, by (2), s1(.) is decreasing. A similar argument can be used to show that if s1(.) is increasing, s2(.) is also increasing.

The next proposition establishes a convenient sufficient condition for adverse selection to arise for some security design. It states that any security design as steep as, or steeper than, standard equity will induce a decreasing equilibrium if the investment cost x(v) increases at a rate faster than the net return v.

Proposition 2. A second-price equity auction induces a decreasing (resp. increasing) equilibrium bidding strategy if x(v)/v is increasing (resp. decreasing) in v.

Proof. Letting s denote the equity share, (1) becomes ES(s(v), v) = s(v) E[Z | v] = s(v)(v + x(v)) = v, or

  s(v) = v / (v + x(v)) = 1 / (1 + x(v)/v),

which is increasing (resp. decreasing) if x(v)/v is decreasing (resp. increasing).

We now explore the revenue consequence of adverse selection.

Proposition 3. Suppose that S1 is steeper than S2. Letting s1(.) denote the equilibrium bidding strategy under S1, if s1(.) is decreasing, then the seller's revenue is lower with S1 than with S2.

Proof. We show that the desired revenue ranking holds in the ex-post sense. Fix a value profile v_1, ..., v_n and let v_(r) denote the r-th highest value.
Since s1(.) is decreasing, a winner under S1 pays

  ES1(s1(v_(n-1)), v_(n)) < ES1(s1(v_(n)), v_(n)) = v_(n).

First, we consider the case in which, under S2, the winner's value is some v > v_(n) and the second-highest bidder's value is w. Then, the winner pays ES2(s2(w), v), which is higher than what is paid by the winner under S1, since

  ES2(s2(w), v) > ES2(s2(w), v_(n)) >= ES2(s2(v_(n)), v_(n)) = v_(n) > ES1(s1(v_(n-1)), v_(n)).

Second, consider the case in which, under S2, the winner's value is v_(n) and the second-highest bidder's value is w. In this case, the winner under S2 pays

  ES2(s2(w), v_(n)) >= ES2(s2(v_(n-1)), v_(n)) > ES1(s1(v_(n-1)), v_(n)),

where the second inequality follows from the facts that S1 is steeper than S2, that ES1(s1(v_(n-1)), v_(n-1)) = ES2(s2(v_(n-1)), v_(n-1)) = v_(n-1), and that v_(n) < v_(n-1).

Not only is a steeper security design more vulnerable to adverse selection than a flatter one, but the former entails lower expected revenue than the latter, even when the latter too suffers adverse selection. The reason is that the steeper design magnifies the competitive differences of the bidders more than a flatter design does when decreasing equilibrium strategies are employed under both designs. As noted by DKS, a call option is the steepest, and standard debt is the flattest, among all feasible securities. Further, a cash payment is even flatter than a standard debt. Combining Propositions 2 and 3, we then arrive at rather surprising implications much in contrast with DKS:

Corollary 1. Suppose x(v)/v is increasing in v. Then, for a second-price auction, (i) cash or debt yields higher expected revenue than equity or any securities steeper than standard equity. (ii) A call option yields the lowest expected revenue among all feasible securities.

While the condition involves a restriction, part (ii) may likely hold under a weaker condition.
For instance, the result will hold whenever a call option auction induces a decreasing equilibrium.

2.2 First-Price Auctions

We now establish analogous results about security designs under a first-price auction. To this end, we assume that v is symmetrically distributed across bidders, following a distribution F : [v_L, v_H] -> [0, 1] with a density f(v) > 0 for all v in (v_L, v_H). We focus on a symmetric equilibrium bidding strategy s_F(.) that is differentiable.

As before, we first establish the sense in which a steeper security design is more vulnerable than a flatter one to adverse selection. All proofs not provided here appear in the Appendix.

Proposition 4. Suppose that S1 is steeper than S2. Suppose also that the first-price auction with S2 induces a decreasing equilibrium bidding strategy. Then, the equilibrium of the first-price auction with S1, if it exists, must also be decreasing.

As before, adverse selection affects revenue more adversely for a steeper security design than for a flatter one, which stands in contrast with the finding of DKS.

Proposition 5. Suppose that S1 is steeper than S2 and the first-price auction with S2 induces a decreasing equilibrium strategy. Then, the seller's revenue is lower with S1 than with S2 whenever the former admits an equilibrium.

3 Ranking Auction Formats

In this section, we study how adverse selection can affect the ranking of standard auction formats. We first show that there is a sense in which the first-price auction is more prone to adverse selection than a second-price auction:

Proposition 6. Suppose the equilibrium bidding strategy of the second-price auction is decreasing. Then, any equilibrium bidding strategy of the first-price auction, if it exists, must also be decreasing.

This difference in the two standard auction formats has consequences for the seller's expected revenue.
We now show that, much in contrast to DKS's finding, whenever adverse selection plagues both formats under a given feasible security, a second-price auction yields a higher expected revenue than a first-price auction. To this end, following DKS, we call an ordered set S of securities super-convex if every S in S is steeper than any security obtained from a (nontrivial) convex combination of securities in S, and convex if it is equal to its convex hull.

Proposition 7. Suppose that S is super-convex (resp. convex) and the second-price auction induces a decreasing equilibrium bidding strategy. Then, the first-price auction generates a lower (resp. the same) expected revenue than the second-price auction whenever the former admits an equilibrium.
Next, since call options are super-convex (as noted by DKS), Proposition 7 implies that a first-price auction with call options yields lower revenue than a second-price auction with call options, which, by Corollary 1, in turn yields lower revenue than a second-price auction with any securities. 4 Concluding Remarks We conclude our comment with two remarks. First, we have focused on the extreme form of adverse selection in which the bidders adopt decreasing bidding strategies. This implicitly assumes that the seller commits to a given auction rule. If she makes no such commitment but rather selects the winner ex post optimally based on her inference from equilibrium strategies, then a separating equilibrium will unravel, so a decreasing equilibrium will not arise. While it is difficult to describe the resulting equilibrium precisely, our analysis seems relevant even in this latter case, for a couple of reasons. First, whenever the equilibrium involves a decreasing strategy in our model, the worst type can profitably mimic the equilibrium strategy of any type. This means that the bidder will not be selected efficiently, so adverse selection is unavoidable, in equilibrium. In this sense, focusing on the circumstance entailing a decreasing equilibrium (under our exogenous rule) serves as a useful proxy for the severity of adverse 7 10 selection even in this case. Second, adverse selection appears to have a serious revenue consequence for the seller even when she behaves ex post optimally. No matter how the seller chooses the winner, given adverse selection (under our exogenous rule), the seller s revenue must be low enough to leave the worst type with positive rents. 4 For this reason, adverse selection should be taken seriously in auction/security design. Second, the precise implications of adverse selection for security and auction design depend on the underlying model. Nevertheless, the particular model we have considered seems plausible. 
Further, its main feature that a higher return corresponds to a higher investment cost lends itself to a standard moral hazard interpretation. That is, an insight similar to those developed here will apply if the winning bidder s return depends on his costly effort. A steep security design could suppress the incentive for the efforts and thus could reduce the surplus accruing to the seller. Appendix Proof of Proposition 4. Let s i F ( ) denote the equilibrium strategy of the first-price auction with S i and s i ( ) that of the second-price auction so that s i ( ) is a solution of (1) with S i. As assumed, s 2 F ( ) is decreasing. The standard argument can be used to show that there is no atom in the support of s i F ( ), i = 1, 2. Step 1. v is the unique minimizer of s 1 F ( ). Proof. Suppose for a contradiction that s 1 F ( ) is minimized at some v < v so that v has to obtain zero payoff at equilibrium under S 1 since there is no atom. Since s 2 F ( ) is decreasing and thus v obtains zero equilibrium payoff with S 2, it must be that ES 2 (s 2 F (v), v) = v = ES 1 (s 1 (v), v). (Recall s i ( ) denotes the equilibrium strategy of the second-price auction with security S i, satisfying (1).) Given this, S 1 being steeper than S 2 implies that ES 2 (s 2 F (v), v) > ES 1 (s 1 (v), v) for all v < v. (3) 4 This again follows from the observation that if our exogenous auction rule induces a decreasing equilibrium, the worst type can profitably mimic the equilibrium strategy of any type under the endogenous selection rule. 8 11 So, we have v ES 1 (s 1 F (v), v ) v ES 1 (s 1 (v), v ) > v ES 2 (s 2 F (v), v ) > v ES 2 (s 2 F (v ), v ) 0 = v ES 1 (s 1 F (v ), v ), where the first inequality follows from s 1 F (v) s1 (v), the second from (3), the third from s 2 F ( ) decreasing, the fourth from s2 F (v ) being the equilibrium bid for v with S 2, and the last equality from v earning zero equilibrium payoff with S 1. 
The above inequality results in ES 1 (s 1 F (v), v ) < ES 1 (s 1 F (v ), v ) or s 1 F (v) < s1 F (v ), contradicting that s 1 F ( ) is minimized at v. To simplify notation, s 1 F ( ) is denoted as s F ( ) from now on. Then, Step 1 implies s F (v) < s F (v), so the proof will be complete if it can be shown that s F (v) 0 for all v (v, v), which is established in the next two steps. Step 2. Let V 0 = {v (v, v) : s F (v) = 0 and v is a local minimum}. Then, V 0 =. Proof. Suppose that V 0 is not empty. One can then find v 0 V 0 such that s F (v 0 ) s F (v ) for all v V 0. One can also find some v 1 (v 0, v) such that s F (v 1 ) = s F (v 0 ) and s F (v 1) < 0. 5 Consider a downward deviation by v 1 to slightly lower s and mimic v 1 + ɛ with small ɛ > 0. It is straightforward that as ɛ 0, the marginal cost from decrease in the winning probability is d := (n 1)f(v 1 )F n 2 (v 1 )[v 1 E(s F (v 1 ), v 1 )] while the marginal benefit from decrease in the expected payment is d + := (1 F (v 1 )) n 1 s F (v 1 )ES s (s F (v 1 ), v 1 ). For this deviation to be unprofitable, we must have d d +. Consider now an upward deviation by v 1 to slightly raise s and mimic v 1 ɛ for small ɛ > 0. We will then be able to find some ɛ 1 (ɛ), ɛ 2 (ɛ) > 0 such that s F (v 0 ɛ 1 (ɛ)) = s F (v 0 + ɛ 2 (ɛ)) = s F (v 1 ɛ). With this deviation, the winning probability is equal to n 1 ( ) n 1 w(ɛ) := (F (v 0 + ɛ 2 (ɛ)) F (v 0 ɛ 1 (ɛ))) k (1 F (v 1 ɛ)) n 1 k. k k=0 5 This is possible since v is a unique minimizer of s F ( ) 9 12 Thus, w(ɛ) w(0) lim ɛ 0 ɛ Here, the strict inequality holds true since lim ɛ 0 (1 F (v 1 ɛ)) n 1 (1 F (v 1 )) n 1 ɛ + lim ɛ 0 (n 1)(F (v 0 + ɛ 2 (ɛ)) F (v 0 ɛ 1 (ɛ)))(1 F (v 1 ɛ)) n 2 ɛ >(n 1)f(v 1 )F n 2 (v 1 ). (4) (n 1)(F (v 0 + ɛ 2 (ɛ)) F (v 0 ɛ 1 (ɛ)))(1 F (v 1 ɛ)) n 2 lim ɛ 0 ɛ =(n 1)f(v 0 )(ɛ 1(0) + ɛ 2(0))(1 F (v 1 )) n 2 > 0, which is in turn due to the fact that ɛ 1(0) and ɛ 2(0) are positive since v 0 is a local minimizer of s F ( ). 
[Footnote 6: More precisely, $\epsilon_1(\epsilon) = v_0 - s^{F^{-1}}(s^F(v_1 - \epsilon))$ and thus $\epsilon_1'(0) = s^{F\prime}(v_1)/s^{F\prime}(v_0) = \infty$, and similarly for $\epsilon_2'(0)$.]

By (4), the marginal increase in the winning probability from the upward deviation is greater than $(n-1) f(v_1) F^{n-2}(v_1)$, which means that the associated marginal benefit is greater than $d^-$. Clearly, the marginal cost from the increase in the expected payment is equal to $d^+$. Considering $d^- \geq d^+$, however, this implies that the upward deviation is profitable.

Given Step 2, that $s^{F\prime}(v) \leq 0$ for all $v \in (\underline{v}, \bar{v})$ can be established if we rule out the case in which $s^F(\cdot)$ is hump-shaped, which leads to Step 3.

Step 3. There is no $v \in (\underline{v}, \bar{v})$ at which $s^F(\cdot)$ achieves a (global) maximum.

Proof. Suppose to the contrary that there is a global maximizer $v_m \in (\underline{v}, \bar{v})$. One must then be able to find some $v_1 \in (v_m, \bar{v})$ such that $s^F(v_1) = s^F(\underline{v})$ and $s^{F\prime}(v_1) < 0$. Considering a downward deviation by $v_1$, one can obtain expressions for the associated marginal benefit and cost that are the same as $d^+$ and $d^-$ defined in Step 2. Also, it must be that $d^- \geq d^+$. Then, an argument similar to the one that resulted in (4) above can be used to show that an upward deviation will lead to a marginal increase in the winning probability greater than $(n-1) f(v_1) F^{n-2}(v_1)$. So, the marginal benefit from the upward deviation is greater than $d^-$ while the marginal cost is equal to $d^+$, which implies that the upward deviation is profitable.

Proof of Proposition 5. According to Proposition 4, given the condition, the equilibrium bidding strategy under $S_1$, whenever it exists, must be decreasing. We then mimic the proof of Proposition 1 in DKS. To that end, let $s_i^F(v)$ and $U^i(v)$ denote type $v$'s equilibrium bid and equilibrium payoff, respectively, in the first-price auction with $S_i$.

Claim 1. If $U^2(v) < U^1(v)$ for some $v < \bar{v}$, then $U^2(v') < U^1(v')$ for all $v' < v$.

Proof. Suppose not.
Then, there must be some $\hat{v} < v$ such that $U^1(\hat{v}) = U^2(\hat{v})$ and $\frac{dU^1(\hat{v})}{dv} \geq \frac{dU^2(\hat{v})}{dv}$. This implies that $ES^1(s_1^F(\hat{v}), \hat{v}) = ES^2(s_2^F(\hat{v}), \hat{v})$ and, by the envelope theorem,
$$\frac{dU^1(\hat{v})}{dv} = (1 - F(\hat{v}))^{n-1} \left[ 1 - \frac{\partial ES^1(s_1^F(\hat{v}), \hat{v})}{\partial v} \right] \geq (1 - F(\hat{v}))^{n-1} \left[ 1 - \frac{\partial ES^2(s_2^F(\hat{v}), \hat{v})}{\partial v} \right] = \frac{dU^2(\hat{v})}{dv},$$
or $\frac{\partial ES^1(s_1^F(\hat{v}), \hat{v})}{\partial v} \leq \frac{\partial ES^2(s_2^F(\hat{v}), \hat{v})}{\partial v}$, which is a contradiction since $S_1$ is steeper than $S_2$.

Now consider the equilibrium of the first-price auction under $S_2$ with a modified distribution which is the same as $F(\cdot)$ except that the support is truncated at $\bar{v} - \epsilon$ for small $\epsilon$ and there is a mass equal to $1 - F(\bar{v} - \epsilon)$ at $\bar{v} - \epsilon$. Letting $U_\epsilon^2(\cdot)$ denote the payoff for this equilibrium, we have $U_\epsilon^2(\bar{v} - \epsilon) = 0 < U^1(\bar{v} - \epsilon)$. Note that the above claim still holds between $U_\epsilon^2(\cdot)$ and $U^1(\cdot)$ with $\bar{v}$ being replaced by $\bar{v} - \epsilon$. Thus, we have $U_\epsilon^2(v) < U^1(v)$ for all $v \leq \bar{v} - \epsilon$. By making $\epsilon$ converge to zero, we conclude that the buyers' payoffs are higher with $S_1$, which implies that the seller's revenue is higher with $S_2$, as desired.

Proof of Proposition 6. Recall $s(\cdot)$ denotes the solution of (1). Clearly, $s^F(v) \leq s(v)$ for all $v$. As in Step 1 of Proposition 4, we first prove that the highest type $\bar{v}$ must be the unique minimizer of $s^F(\cdot)$. Suppose to the contrary that there is some $v' < \bar{v}$ at which $s^F(\cdot)$ is minimized, and note that its interim equilibrium payoff is zero, or $v' = ES(s^F(v'), v')$, so $s^F(v') = s(v')$. However, the fact that $s(\cdot)$ is decreasing and $s^F(v') = s(v')$ implies by (2) that $v - ES(s^F(v'), v) < 0$ for some $v$ in the neighborhood of $v'$, which leads to a contradiction that
$$0 = v - ES(s(v), v) \leq v - ES(s^F(v), v) \leq v - ES(s^F(v'), v) < 0,$$
since $s(v) \geq s^F(v) \geq s^F(v')$. Thus, we conclude that $\bar{v}$ is the unique minimizer of $s^F(\cdot)$.

The rest of the proof then follows the same line of argument as that in Steps 2 and 3 of the proof of Proposition 4 and is thus omitted.

Proof of Proposition 7. Note that according to Proposition 6, any equilibrium bidding strategy of the first-price auction must be decreasing.
Then, the proof follows the same line of argument as the proof of Proposition 5, for the super-convexity of $S$ implies that
$$\frac{\partial ES(s^F(v), v)}{\partial v} > \frac{\partial E[ES(s(v'), v) \mid v' > v]}{\partial v} \quad \text{if } ES(s^F(v), v) = E[ES(s(v'), v) \mid v' > v]. \tag{5}$$
The proof of revenue equivalence between the first- and second-price auctions with convex securities follows from the observation that if $S$ is convex, then the inequality in (5) becomes an equality, which makes all the inequalities in the proof of Proposition 5 into equalities.

References

[1] Board, Simon. "Bidding into the Red: A Model of Post-Auction Bankruptcy," Journal of Finance, 2007, LXII.
[2] DeMarzo, Peter, Kremer, Ilan and Skrzypacz, Andrzej. "Bidding with Securities: Auctions and Security Design," American Economic Review, 2005, 95.
[3] Hansen, Robert G. "Auctions with Contingent Payments," American Economic Review, 1985, 75.
[4] Rhodes-Kropf, Matthew, and Viswanathan, S. "Financing Auction Bids," Journal of Finance, 2000, 55.
[5] Samuelson, William. "Auctions with Contingent Payments: Comment," American Economic Review, 1987, 77.
[6] Zheng, Charles Z. "High Bids and Broke Winners," Journal of Economic Theory, 2001, 100.
Usage

This module supports the SMB3 family of advanced network protocols (as well as older dialects, originally called "CIFS" or SMB1).

The CIFS VFS module for Linux supports many advanced network filesystem features such as hierarchical DFS like namespace, hardlinks, locking and more. It was designed to comply with the SNIA CIFS Technical Reference (which supersedes the 1992 X/Open SMB Standard) as well as to perform best practice practical interoperability with Windows 2000, Windows XP, Samba and equivalent servers. This code was developed in participation with the Protocol Freedom Information Foundation. CIFS, and now SMB3, has become a de facto standard for interoperating between Macs and Windows and major NAS appliances.

Please see MS-SMB2 (the detailed SMB2/SMB3/SMB3.1.1 protocol specification) for more details.

For questions or bug reports please contact:

See the project page at:

Build instructions

For Linux:

1) Download the kernel and change directory into the top of the kernel directory tree (e.g. /usr/src/linux-2.5.73)
2) make menuconfig (or make xconfig)
3) select cifs from within the network filesystem choices
4) save and exit
5) make

Installation instructions

If you have built the CIFS vfs as a module (successfully), simply type make modules_install (or, if you prefer, manually copy the file to the modules directory, e.g. /lib/modules/2.4.10-4GB/kernel/fs/cifs/cifs.ko).

If you have built the CIFS vfs into the kernel itself, follow the instructions for your distribution on how to install a new kernel (usually you would simply type make install).

If you do not have the utility mount.cifs (in the Samba 4.x source tree and on the CIFS VFS web site), copy it to the same directory in which mount helpers reside (usually /sbin). Although the helper software is not required, mount.cifs is recommended; most distros include a cifs-utils package containing this utility, so installing that package is the easiest route.
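The build and installation steps above can be sketched as a short command sequence (the kernel source path is illustrative and the config menu location may vary by kernel version):

```shell
# Build cifs as a module from the top of the kernel tree (path is illustrative)
cd /usr/src/linux
make menuconfig        # enable CIFS under File systems -> Network File Systems
make modules
make modules_install   # installs fs/cifs/cifs.ko under /lib/modules/$(uname -r)
modprobe cifs          # load the freshly installed module
```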
Note that running the Winbind pam/nss module (logon service) on all of your Linux clients is useful in mapping Uids and Gids consistently across the domain to the proper network user. The mount.cifs mount helper can be found at cifs-utils.git on git.samba.org

If cifs is built as a module, then the size and number of network buffers and maximum number of simultaneous requests to one server can be configured. Changing these from their defaults is not recommended. By executing modinfo on the module file:

modinfo kernel/fs/cifs/cifs.ko

the list of configuration changes that can be made at module initialization time (by running insmod cifs.ko) can be seen.

Recommendations

To improve security, the SMB2.1 dialect or later (usually will get SMB3) is now the default. To use old dialects (e.g. to mount Windows XP) use "vers=1.0" on mount (or vers=2.0 for Windows Vista). Note that CIFS (vers=1.0) is much older and less secure than the default dialect SMB3, which includes many advanced security features such as downgrade attack detection, encrypted shares, and stronger signing and authentication algorithms. There are additional mount options that may be helpful for SMB3 to get improved POSIX behavior (NB: can use vers=3.0 to force only SMB3, never 2.1): mfsymlinks and cifsacl and idsfromsid

Allowing User Mounts

To permit users to mount and unmount over directories they own is possible with the cifs vfs. A way to enable such mounting is to mark the mount.cifs utility as suid (e.g. chmod +s /sbin/mount.cifs). To enable users to umount shares they mount requires mount.cifs version 1.4 or later and an entry for the share in /etc/fstab indicating that a user may unmount it, e.g.:

//server/usersharename /mnt/username cifs user 0 0

Note that when the mount.cifs utility is run suid (allowing user mounts), in order to reduce risks, the nosuid mount flag is passed in on mount to disallow execution of an suid program mounted on the remote target.
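Putting the user-mount setup above together, the one-time root configuration and the resulting unprivileged workflow might look like this (server, share, and mountpoint names are illustrative):

```shell
# One-time setup by root
chmod +s /sbin/mount.cifs                  # allow non-root users to run the mount helper
echo '//server/usersharename /mnt/username cifs user 0 0' >> /etc/fstab

# Afterwards an ordinary user can mount and unmount the share
mount /mnt/username                        # mount.cifs prompts for the password
umount /mnt/username                       # permitted because of the "user" fstab flag
```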
When mount is executed as root, nosuid is not passed in by default, and execution of suid programs on the remote target would be enabled by default. This can be changed, as with nfs and other filesystems, by simply specifying nosuid among the mount options. For user mounts, though, to be able to pass the suid flag to mount requires rebuilding mount.cifs with the following flag: CIFS_ALLOW_USR_SUID

There is a corresponding manual page for cifs mounting in the Samba 3.0 and later source tree in docs/manpages/mount.cifs.8

Allowing User Unmounts

To permit users to umount mounts, the umount helper umount.cifs is invoked, unless umount is invoked with -i (which will avoid invoking a umount helper). As with mount.cifs, to enable user unmounts umount.cifs must be marked as suid (e.g. chmod +s /sbin/umount.cifs) or equivalent (some distributions allow adding entries to the /etc/permissions file to achieve the equivalent suid effect). For this utility to succeed the target path must be a cifs mount, and the uid of the current user must match the uid of the user who mounted the resource.

Also note that the customary way of allowing user mounts and unmounts is (instead of using mount.cifs and umount.cifs as suid) to add a line to the file /etc/fstab for each //server/share you wish to mount, but this can become unwieldy when potential mount targets include many or unpredictable UNC names.

Samba Considerations

Most current servers support SMB2.1 and SMB3, which are more secure, but there are useful protocol extensions for the older, less secure CIFS dialect. So to get the maximum benefit if mounting using the older dialect (CIFS/SMB1), we recommend using a server that supports the SNIA CIFS Unix Extensions standard (e.g. almost any version of Samba, i.e. version 2.2.5 or later), but the CIFS vfs works fine with a wide variety of CIFS servers. Note that uid, gid and file permissions will display default values if you do not have a server that supports the Unix extensions for CIFS (such as Samba 2.2.5 or later).
To enable the Unix CIFS Extensions in the Samba server, add the line:

unix extensions = yes

to your smb.conf file on the server. Note that the following smb.conf settings are also useful (on the Samba server) when the majority of clients are Unix or Linux:

case sensitive = yes
delete readonly = yes
ea support = yes

Note that server ea support is required for supporting xattrs from the Linux cifs client, and that EA support is present in later versions of Samba (e.g. 3.0.6 and later). EA support also works in all versions of Windows, at least to shares on NTFS filesystems. Extended Attribute (xattr) support is an optional feature of most Linux filesystems which may require enabling via make menuconfig. Client support for extended attributes (user xattr) can be disabled on a per-mount basis by specifying nouser_xattr on mount.

The CIFS client can get and set POSIX ACLs (getfacl, setfacl) to Samba servers version 3. Some administrators may want to change Samba's smb.conf "map archive" and "create mask" parameters from the default. Unless the create mask is changed, newly created files can end up with an unnecessarily restrictive default mode, which may not be what you want, although if the CIFS Unix extensions are enabled on the server and client, subsequent setattr calls (e.g. chmod) can fix the mode. Note that creating special devices (mknod) remotely may require specifying a mkdev function to Samba if you are not using Samba 3.0.6 or later. For more information on these see the manual pages (man smb.conf) on the Samba server system. Note that the cifs vfs, unlike the smbfs vfs, does not read the smb.conf on the client system (the few optional settings are passed in on mount via -o parameters instead).

Note that Samba 2.2.7 or later includes a fix that allows the CIFS VFS to delete open files (required for strict POSIX compliance). Windows servers already supported this feature.
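Collected in one place, the server-side smb.conf settings mentioned above might look like the following sketch (the section placement is illustrative; adjust to your own configuration):

```text
[global]
    # Enable the CIFS Unix Extensions (for SMB1/CIFS mounts only)
    unix extensions = yes
    # Helpful when most clients are Unix or Linux
    case sensitive = yes
    delete readonly = yes
    ea support = yes
```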
Samba server does not allow symlinks that refer to files outside of the share, so in Samba versions prior to 3.0.6, most symlinks to files with absolute paths (ie beginning with slash) such as:

ln -s /mnt/foo bar

would be forbidden. Samba 3.0.6 server or later includes the ability to create such symlinks safely by converting unsafe symlinks (ie symlinks to server files that are outside of the share) to a samba specific format on the server that is ignored by applications running on the same server as Samba.

Use instructions

Once the CIFS VFS support is built into the kernel or installed as a module (cifs.ko), you can use mount syntax like the following to access Samba or Mac or Windows servers:

mount -t cifs //9.53.216.11/e$ /mnt -o username=myname,password=mypassword

Before -o the option -v may be specified to make the mount.cifs mount helper display the mount steps more verbosely. After -o the following commonly used cifs vfs specific options are supported:

username=<username>
password=<password>
domain=<domain name>

Other cifs mount options are described below. Use of TCP names (in addition to ip addresses) is available if the mount helper (mount.cifs) is installed. If you do not trust the server to which you are mounting, or if you do not have cifs signing enabled (and the physical network is insecure), consider use of the standard mount options noexec and nosuid to reduce the risk of running an altered binary on your local system (downloaded from a hostile server or altered by a hostile router).

Although mounting using the format corresponding to the CIFS URL specification is not possible in mount.cifs yet, it is possible to use an alternate format for the server and sharename (which is somewhat similar to NFS style mount syntax) instead of the more widely used UNC format (i.e. \\server\share):

mount -t cifs tcp_name_of_server:share_name /mnt -o user=myname,pass=mypasswd

When using the mount helper mount.cifs, passwords may be specified via alternate mechanisms, instead of specifying the password after -o using the normal pass= syntax on the command line:

1) By including it in a credential file. Specify credentials=filename as one of the mount options. Credential files contain two lines:

   username=someuser
   password=your_password

2) By specifying the password in the PASSWD environment variable (similarly the user name can be taken from the USER environment variable).

3) By specifying the password in a file by name via PASSWD_FILE

4) By specifying the password in a file by file descriptor via PASSWD_FD

If no password is provided, mount.cifs will prompt for password entry.

Restrictions

Servers must support either "pure-TCP" (port 445 TCP/IP CIFS connections) or RFC 1001/1002 support for "Netbios-Over-TCP/IP". This is not likely to be a problem as most servers support this.

Valid filenames differ between Windows and Linux. Windows typically restricts filenames which contain certain reserved characters (e.g. the character : which is used to delimit the beginning of a stream name by Windows), while Linux allows a slightly wider set of valid characters in filenames. Windows servers can remap such characters when an explicit mapping is specified in the server's registry. Samba starting with version 3.10 will allow such filenames (ie those which contain valid Linux characters, which normally is the default for SMB3). This remap (mapposix) range is also compatible with Mac (and "Services for Mac" on some older Windows).

CIFS VFS Mount Options

A partial list of the supported mount options follows:

- username - The user name to use when trying to establish the CIFS session.
- password - The user password. If the mount helper is installed, the user will be prompted for password if not supplied.
- ip - The ip address of the target server
- unc - The target server Universal Network Name (export) to mount.
- domain - Set the SMB/CIFS workgroup name prepended to the username during CIFS session establishment
- forceuid - Set the default uid for inodes to the uid passed in on mount. (default)
- forcegid - (similar to above but for the groupid instead of uid) (default)
- noforceuid - Fill in file owner information (uid) by requesting it from the server if possible. With this option, the value given in the uid= option (on mount) will only be used if the server can not support returning uids on inodes.
- noforcegid - (similar to above but for the group owner, gid, instead of uid)
- uid - Set the default uid for inodes, and indicate to the cifs kernel driver which local user mounted. If the server supports the unix extensions, the default uid is not used to fill in the owner fields of inodes (files) unless the forceuid parameter is specified.
- gid - Set the default gid for inodes (similar to above).
- file_mode - If CIFS Unix extensions are not supported by the server, this overrides the default mode for file inodes.
- fsc - Enable local disk caching using FS-Cache (off by default). This option could be useful to improve performance on a slow link, heavily loaded server and/or network where reading from the disk is faster than reading from the server (over the network). This could also impact scalability positively as the number of calls to the server are reduced. However, local caching is not suitable for all workloads, e.g. read-once type workloads. So, you need to consider carefully your workload/scenario before using this option. Currently, local disk caching is functional for CIFS files opened as read-only.
- dir_mode - If CIFS Unix extensions are not supported by the server, this overrides the default mode for directory inodes.
- port - attempt to contact the server on this tcp port, before trying the usual ports (port 445, then 139).
- iocharset - Codepage used to convert local path names to and from Unicode.
- rsize - default read size
- wsize - default write size (default 57344). The maximum wsize currently allowed by CIFS is 57344 (fourteen 4096 byte pages).
- actimeo=n - attribute cache timeout in seconds (default 1 second). After this timeout, the cifs client requests fresh attribute information from the server. This option allows tuning the attribute cache timeout to suit the workload needs. Shorter timeouts mean better cache coherency, but an increased number of calls to the server. Longer timeouts mean a reduced number of calls to the server at the expense of less strict cache coherency checks (i.e. incorrect attribute cache for a short period of time).
- rw - mount the network share read-write (note that the server may still consider the share read-only)
- ro - mount network share read-only
- version - used to distinguish different versions of the mount helper utility (not typically needed)
- sep - if first mount option (after the -o), overrides the comma as the separator between the mount parms. e.g. -o user=myname,password=mypassword,domain=mydom could be passed instead with period as the separator by -o sep=.user=myname.password=mypassword.domain=mydom This might be useful when a comma is contained within the username or password or domain. This option is less important when the cifs mount helper mount.cifs (version 1.1 or later) is used.
- nosuid - Do not allow remote executables with the suid bit set to be executed. This is only meaningful for mounts to servers such as Samba which support the CIFS Unix Extensions. If you do not trust the servers in your network (your mount targets) it is recommended that you specify this option for greater security.
- exec - Permit execution of binaries on the mount.
- noexec - Do not permit execution of binaries on the mount.
- dev - Recognize block devices on the remote mount.
- nodev - Do not recognize devices on the remote mount.
- suid - Allow remote files on this mountpoint with suid enabled to be executed (default for mounts when executed as root, nosuid is default for user mounts).
- credentials - Although ignored by the cifs kernel component, it is used by the mount helper, mount.cifs. When mount.cifs is installed it opens and reads the credential file specified in order to obtain the userid and password arguments which are passed to the cifs vfs.
- guest - Although ignored by the kernel component, the mount.cifs mount helper will not prompt the user for a password if guest is specified on the mount options. If no password is specified a null password will be used.
-, but it may be useful with non CIFS Unix Extension mounts for cases in which the default mode is specified on the mount but is not to be enforced on the client (e.g. perhaps when MultiUserMount is enabled). Note that this does not affect the normal ACL check on the target machine done by the server software (of the server ACL against the user name provided at mount time).
- serverino - Use server's inode numbers instead of generating automatically incrementing inode numbers on the client. Although this will make it easier to spot hardlinked files (as they will have the same inode numbers) and inode numbers may be persistent, note that the server does not guarantee that the inode numbers are unique if multiple server side mounts are exported under a single share (since inode numbers on the servers might not be unique if multiple filesystems are mounted under the same share). This is now the default if the server supports the required network operation.
- noserverino - Client generates inode numbers (rather than using the actual one from the server). These inode numbers will vary after unmount or reboot, which can confuse some applications, but not all server filesystems support unique inode numbers.
- netbiosname - When mounting to servers via port 139, specifies the RFC1001 source name to use to represent the client netbios machine name when doing the RFC1001 netbios session initialize.
- direct - Do not do inode data caching on files opened on this mount. This precludes mmapping.
- strictcache - Use for switching on strict cache mode. In this mode the client reads from the cache all the time it has an Oplock Level II; otherwise it reads from the server. All written data is stored in the cache, but if the client doesn't have an Exclusive Oplock, it also writes the data to the server.
- rwpidforward - Forward the pid of the process that opened a file to any read or write operation on that file. This prevents applications like WINE from failing on reads and writes if we use mandatory brlock style.
- acl - Allow setfacl and getfacl to manage posix ACLs if the server supports them. (default)
- noacl - Do not allow setfacl and getfacl calls on this mount.
- user_xattr - Allow getting and setting user xattrs (those attributes whose name begins with user. or os2.) as OS/2 EAs (extended attributes) to the server. This allows support of the setfattr and getfattr utilities. (default)
- nouser_xattr - Do not allow getfattr/setfattr to get/set/list xattrs.
- mapchars - Translate six of the seven reserved characters (all but backslash): * ? < > | :
- nocase - Request case insensitive path name matching (case sensitive is the default if the server supports it). (The mount option ignorecase is identical to nocase.)
- posixpaths - If CIFS Unix extensions are supported, attempt to negotiate posix path name support which allows certain characters forbidden in typical CIFS filenames, without requiring remapping. (default)
- noposixpaths - If CIFS Unix extensions are supported, do not request posix path name support (this may cause servers to reject creating a file with certain reserved characters).
- nounix - Disable the CIFS Unix Extensions for this mount (tree connection).
This is rarely needed, but it may be useful in order to turn off multiple settings all at once (i.e. posix acls, posix locks, posix paths, symlink support and retrieving uids/gids/mode from the server) or to work around a bug in a server which implements the Unix Extensions.
- nobrl - Do not send byte range lock requests to the server. This is necessary for certain applications that break with cifs style mandatory byte range locks (and most cifs servers do not yet support requesting advisory byte range locks).
- forcemandatorylock - Even if the server supports posix (advisory) byte range locking, send only mandatory lock requests. Some (presumably rare) applications, originally coded for DOS/Windows, which require Windows style mandatory byte range locking may be able to take advantage of this option, forcing the cifs client to only send mandatory locks even if the cifs server would support posix advisory locks. forcemand is accepted as a shorter form of this mount option.
- nostrictsync - If this mount option is set, when an application does an fsync call the cifs client does not send an SMB Flush to the server (to force the server to write all dirty data for this file immediately to disk), although cifs still sends all dirty (cached) file data to the server and waits for the server to respond to the write. Since SMB Flush can be very slow, and some servers may be reliable enough (to risk slightly delaying the flushing of data to disk on the server), turning on this option may be useful to improve performance for applications that fsync too much, at a small risk of server crash. If this mount option is not set, by default cifs will send an SMB flush request (and wait for a response) on every fsync call.
- nodfs - Disable DFS (global name space support) even if the server claims to support it. This can help work around a problem with parsing of DFS paths with Samba server versions 3.0.24 and 3.0.25.
- remount - Remount the share (often used to change from ro to rw mounts or vice versa).
- cifsacl - Report mode bits (e.g. on stat) based on the Windows ACL for the file. (EXPERIMENTAL)
- servern - Specify the server's netbios name (RFC1001 name) to use when attempting to set up a session to the server. This is needed for mounting to some older servers (such as OS/2 or Windows 98 and Windows ME) since they do not support a default server name. A server name can be up to 15 characters long and is usually uppercased.
- sfu - When the CIFS Unix Extensions are not negotiated, attempt to create device files and fifos in a format compatible with Services for Unix (SFU). In the future the bottom 9 bits of the mode will also be emulated using queries of the security descriptor (ACL).
- mfsymlinks - Enable support for Minshall+French symlinks (see the Samba wiki for details). This option is ignored when specified together with the 'sfu' option. Minshall+French symlinks are used even if the server supports the CIFS Unix Extensions.
- sign - Must use packet signing (helps avoid unwanted data modification by intermediate systems in the route). Note that signing does not work with lanman or plaintext authentication.
- seal - Must seal (encrypt) all data on this mounted share before sending on the network. Requires support for Unix Extensions. Note that this differs from the sign mount option in that it causes encryption of data sent over this mounted share, but other shares mounted to the same server are unaffected.
- locallease - This option is rarely needed. Fcntl F_SETLEASE is used by some applications such as Samba and NFSv4 servers to check whether a file is cacheable. CIFS has no way to explicitly request a lease, but can check whether a file is cacheable (oplocked). Unfortunately, even if a file is not oplocked, it could still be cacheable (i.e. the cifs client could grant fcntl leases if no other local processes are using the file), for example when the server does not support oplocks and the user is sure that the only updates to the file will be from this client.
Specifying this mount option will allow the cifs client to check for leases (only) locally for files which are not oplocked, instead of denying leases in that case. (EXPERIMENTAL)
- sec - Security mode. Allowed values are:
  - none - attempt to connect as a null user (no name)
  - krb5 - Use Kerberos version 5 authentication
  - krb5i - Use Kerberos authentication and packet signing
  - ntlm - Use NTLM password hashing (default)
  - ntlmi - Use NTLM password hashing with signing (if /proc/fs/cifs/PacketSigningEnabled is on, or if the server requires signing, this can also be the default)
  - ntlmv2 - Use NTLMv2 password hashing
  - ntlmv2i - Use NTLMv2 password hashing with packet signing
  - lanman - (if configured in the kernel config) use the older lanman hash
- hard - Retry file operations if the server is not responding.
- soft - Limit retries to unresponsive servers (usually only one retry) before returning an error. (default)

The mount.cifs mount helper also accepts a few mount options before -o. With most 2.6 kernel versions of modutils, the version of the cifs kernel module can be displayed via modinfo.

Misc /proc/fs/cifs Flags and Debug Info

These experimental features and tracing can be enabled by changing flags in /proc/fs/cifs (after the cifs module has been installed or built into the kernel, e.g. insmod cifs). To enable a feature, set its flag to 1; e.g. to enable tracing to the kernel message log type:

echo 7 > /proc/fs/cifs/cifsFYI

cifsFYI functions as a bit mask. Setting it to 1 enables additional kernel logging of various informational messages, 2 enables logging of non-zero SMB return codes, and 4 enables logging of requests that take longer than one second to complete (except for byte range lock requests). Setting it to 4 requires CONFIG_CIFS_STATS2 to be set in the kernel configuration (.config). Setting it to 7 enables all three.
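The bitmask semantics of cifsFYI described above can be sketched in Python. Only the numeric values (1, 2, 4, and their sum 7) come from the documentation; the flag names below are illustrative labels of our own:

```python
# cifsFYI is a bit mask: each bit enables one class of debug logging.
# Numeric values are from the documentation above; names are our own labels.
FYI_INFO = 1        # informational messages
FYI_SMB_ERRORS = 2  # non-zero SMB return codes
FYI_SLOW_REQS = 4   # requests taking longer than one second (needs CONFIG_CIFS_STATS2)

# "Setting it to 7 enables all three":
all_flags = FYI_INFO | FYI_SMB_ERRORS | FYI_SLOW_REQS
assert all_flags == 7

def logs_slow_requests(cifs_fyi: int) -> bool:
    """Check whether a given cifsFYI setting enables slow-request logging."""
    return bool(cifs_fyi & FYI_SLOW_REQS)

assert logs_slow_requests(7)
assert not logs_slow_requests(3)  # 3 = info + SMB errors only
```

The same bit test explains why `echo 7 > /proc/fs/cifs/cifsFYI` turns on all three classes of messages at once.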
Finally, tracing of the start of smb requests and responses can be enabled via:

echo 1 > /proc/fs/cifs/traceSMB

Per share (per client mount) statistics are available in /proc/fs/cifs/Stats. Additional information is available if CONFIG_CIFS_STATS2 is enabled in the kernel configuration (.config). The statistics returned include counters which represent the number of attempted and failed (i.e. non-zero return code from the server) SMB3 (or cifs) requests grouped by request type (read, write, close etc.). Also recorded are the total bytes read from and written to the server for that share. Note that due to client caching effects this can be less than the number of bytes read and written by the application running on the client. Statistics can be reset to zero by echo 0 > /proc/fs/cifs/Stats, which may be useful when comparing the performance of two different scenarios. Also note that cat /proc/fs/cifs/DebugData will display information about the active sessions and the shares that are mounted.

Enabling Kerberos (extended security) works but requires version 1.2 or later of the helper program cifs.upcall to be present and configured in the /etc/request-key.conf file. The cifs.upcall helper program is from the Samba project. NTLM, NTLMv2 and LANMAN support do not require this helper. Note that NTLMv2 security (which does not require the cifs.upcall helper program), instead of using Kerberos, is sufficient for some use cases.

DFS support allows transparent redirection to shares in an MS-DFS name space. In addition, DFS support for target shares which are specified as UNC names beginning with host names (rather than IP addresses) requires a user space helper (such as cifs.upcall) to be present in order to translate host names to IP addresses, and the user space helper must also be configured in the file /etc/request-key.conf.
Samba, Windows servers and many NAS appliances support DFS as a way of constructing a global name space to ease network configuration and improve reliability. To use cifs Kerberos and DFS support, the Linux keyutils package should be installed and something like the following lines should be added to the /etc/request-key.conf file:

create cifs.spnego * * /usr/local/sbin/cifs.upcall %k
create dns_resolver * * /usr/local/sbin/cifs.upcall %k
https://doc.kusakata.com/admin-guide/cifs/usage.html
This tutorial is all about SoapUI properties. In the last SoapUI tutorial we saw how to add properties in a Groovy script. A property in SoapUI is similar to a variable/parameter, and in this tutorial we will talk about how to use one in a service request and assign a response value to it through scripting. Later, we will move on to the property transfer test step and then to importing properties. This is the 8th tutorial in our SoapUI online training series.

What you will learn from this SoapUI Tutorial?
- Different Faces of Properties
- Integrating Properties in a Service Request
- Understanding the Property Transfer Test Step
- Loading Properties Externally

There are two types of properties in SoapUI:
- Default properties: included in the SoapUI installation. We can edit some of the default properties but not all.
- Custom/user-defined properties: these are defined by us at any level needed, such as global, project, test suite, test case or test step.

Most often, properties are used to store and retrieve data while executing the test cases. Internally, a property stores its value in key-value format. For example, in the statement below, "Local_Property_FromCurrency" is the key name and "USD" is the value. To access a property's value, we use the property name.

testRunner.testCase.testSteps["Properties"].setPropertyValue("Local_Property_FromCurrency", "USD")

Various property levels in SoapUI Pro

Let us discuss the various property levels in SoapUI Pro. In SoapUI there are three levels of properties available.

Level #1. Project and Custom Properties

In this level, properties are divided into two sections: project properties and custom properties. These appear at the bottom of the navigator panel when we click on the project name. The project properties section has default properties which are created during project creation, for example Name, Description, File etc. In order to create our own properties, we can use the custom properties tab.
Click on the plus icon to create properties. There are several other options next to add, such as remove, move up, move down and sort. Any number of custom properties can be added and used by any section (test suite, test cases) within the project.

Level #2. Test Suite and Custom Properties

These properties are visible only under the test suite. A test suite can contain any number of properties, and they can be accessed from any test step which belongs to that test suite. Test suite properties appear when you click on the respective test suite name under the project. To add custom properties as needed, click on the custom properties tab and then the '+' sign under it.

Level #3. Test Case and Custom Properties

Test case properties are accessible within the test case. They are not accessible by other test cases or even by the test suite under the project.

More details about properties with examples

Properties can store endpoints, login details, header information, domain etc. Even though we have discussed writing and reading data to/from properties, we are yet to discuss this topic in detail with examples. The property levels discussed above are used in scripting to read data.

#1. Reading properties: We will look at how we can read properties in a Groovy script. To access properties at the different levels, the syntax is as follows.

Project:
Syntax: ${#Project#PropertyName}
Example:
def projectPro = testRunner.testCase.testSuite.project.getPropertyValue("Project_Level_Property")
log.info(projectPro)

Test suite:
Syntax: ${#TestSuite#PropertyName}
Example:
def testPro = testRunner.testCase.testSuite.getPropertyValue("Testsuite_Property")
log.info(testPro)

Test case:
Syntax: ${#TestCase#PropertyName}
Example:
def testcasePro = testRunner.testCase.getPropertyValue("Testcase_Property")
log.info(testcasePro)

Refer to the screenshot below.
#2.
Writing to properties: To do this, we have to use the setPropertyValue method.

Syntax: setPropertyValue("property name", "value")

If we assign values to unknown properties, SoapUI will create these properties anew. Existing properties will simply receive the new values during the assignment.

#3. Removing properties through script: This can be done by right-clicking on the property name in the property panel and then clicking the Remove option in the context menu. To do the same through script, use the following statements for the project, test suite or test case levels respectively:

testRunner.testCase.testSuite.project.removeProperty("Testcase_Property");
testRunner.testCase.testSuite.removeProperty("Testcase_Property");
testRunner.testCase.removeProperty("Testcase_Property");

The above scripts are not optimal when we have multiple properties at each level, as these steps have to be repeated for each property. An alternative is to iterate over the properties through script as below:

testRunner.testCase.properties.each { key, value ->
    testRunner.testCase.removeProperty(key)
}

The above script iterates until the last property available under the test case. "key" refers to the name of the property, whereas "value" denotes the actual value of the property. We can modify the above script to remove the bulk property lists present at the various levels.

#4. Adding properties: The addProperty method is used for this, whose syntax is:

addProperty(propertyName);

This can be adapted for each level as below:

testRunner.testCase.testSuite.project.addProperty('ProjectProperty1')
testRunner.testCase.testSuite.addProperty('TestsuiteProperty1')
testRunner.testCase.addProperty('TestcaseProperty1')

After executing the above scripts, click on the project/test suite/test case name and check the custom properties tab in the property panel; the created property appears there.
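The iterate-and-remove Groovy idiom above has one subtlety worth knowing if you port it to other scripting languages: you cannot safely mutate a map while iterating over it. A Python sketch (a plain dict standing in for a SoapUI property map; the names are illustrative, not from the tutorial):

```python
# A dict standing in for a SoapUI property map (illustrative names).
props = {
    "Testcase_Property": "USD",
    "Another_Property": "INR",
}

# Like the Groovy `properties.each { key, value -> removeProperty(key) }`,
# but in Python we must snapshot the keys first: deleting from a dict
# while iterating over it directly raises RuntimeError.
for key in list(props):
    del props[key]

assert props == {}
```

The Groovy `each` over a copy-backed property collection avoids this problem for you, which is why the tutorial's one-liner works as written.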
See below for reference:

Using properties in services

In this section, we will learn how to use properties in services, using the above scripts for adding, assigning and retrieving property data with the currency converter web service.

Integrating properties in a service: Let us start by adding test steps as shown in the below screenshot. In the screenshot, the AddProperties_Script test step contains the following script, which adds two properties, Property_FromCurrency and Property_ToCurrency:

// Add properties
testRunner.testCase.addProperty('Property_FromCurrency')
testRunner.testCase.addProperty('Property_ToCurrency')

// Assign values to the properties
testRunner.testCase.setPropertyValue('Property_FromCurrency', 'USD')
testRunner.testCase.setPropertyValue('Property_ToCurrency', 'INR')

The ServiceRequest_CurrencyConverter_1 test step contains the request with input parameters, as seen below. The values assigned to the properties will be transferred to these parameters during execution. Following this test step, the GetResponseData_Script test step has the script that gets the response value and shows the result in the log. Here's the script:

// Get response data from the service
def response = context.expand('${ServiceRequest_CurrencyConverter_1#Response}')
def parsedResponse = new XmlSlurper().parseText(response)
String convertedValue = parsedResponse.Body.ConversionRateResponse.ConversionRateResult.text()
log.info(convertedValue)

Once all the steps are ready, double click on the test suite name and run the test suite. Then, double click on ServiceRequest_CurrencyConverter_1 and see the response section. This is what we would find:

- The response will be received
- Open the script log to see the resultant data, converted based on the input parameters

This is how we can pass parameters to the input request and get the response through script using properties. Going further, we can also pass the response value to another service as input.
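The XmlSlurper navigation above (`Body.ConversionRateResponse.ConversionRateResult.text()`) has a direct analog in other languages. The Python sketch below does the same extraction with the standard library; the envelope here is a minimal stand-in, not the real service's response, which would carry SOAP namespaces:

```python
import xml.etree.ElementTree as ET

# A minimal stand-in for the currency-converter SOAP response;
# the real payload has service-specific namespaces, omitted for clarity.
response = """
<Envelope>
  <Body>
    <ConversionRateResponse>
      <ConversionRateResult>83.12</ConversionRateResult>
    </ConversionRateResponse>
  </Body>
</Envelope>
"""

# Equivalent of parsedResponse.Body.ConversionRateResponse.ConversionRateResult.text()
root = ET.fromstring(response)
converted = root.find("./Body/ConversionRateResponse/ConversionRateResult").text
print(converted)  # 83.12
```

The point is the same in both languages: parse the response once, then address the value by its element path rather than with string searches.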
Property Transfer

The property transfer test step transfers property data from one property to another during execution. Let us see briefly how to create a property transfer test step and how a property value is transferred between two properties.

- Right click on the test case name under the test suite
- Click Add Step and then click the Properties option in the context menu
- Repeat the above steps to create the second property. See the below screenshot:
- Now we have to add the property transfer test step
- Right click on the test case name and click the Property Transfer option in the context menu
- Enter your desired property transfer name and then click OK
- Click Add, i.e. the plus sign, in the property transfer tool bar
- Specify the transfer name and then click the OK button
- In the right side panel there are two sections: Source and Target. Choose Properties as the source and Property_Zipcode as the property. Do the same in the target section, choosing Target_Property from the property drop down.

When you click the run icon, the property value will be transferred from Property_Zipcode to Target_Property. See the transferred value in the below screenshot.

Note: The source property should contain a default value. In addition, there are many options available in the property transfer screen:

- Fail Transfer on Error
- Transfer Text Content
- Transfer to All
- Entitize Transferred Value(s)
- Set Null on Missing Source
- Ignore Empty/Missing Values
- Use XQuery
- Transfer Child Nodes

Load Properties from an External Source: To load properties from an external source, follow these steps:

- Add a Properties test step under the test case
- Enter the property step name and then click OK
- In the property panel under the navigation panel, click the Custom Properties tab
- Click the load icon to load the properties from the external property file

Note: The property file should be saved or present on your computer. To save the properties, click the save icon.
Then, go to the respective drive and pick the property file as shown below. On OK, we can see the loaded properties and their values in the Custom Properties tab.

Conclusion

Well, that's properties for us! Properties at each level have their own characteristics. During your SoapUI practice, try to include properties whenever possible with the Groovy script test step for adding, removing, assigning and retrieving property data. This is not only useful when you practice with the services, but also critical for real application testing, as this technique is very helpful for asserting your test cases. Transferring properties among test steps is easier than writing repeated scripts to create new ones. SoapUI also offers the wonderful feature of importing and exporting properties. This is useful when we are using common properties, such as login details and session details, across multiple projects: we don't have to create the same properties again and again; we can simply change the property values based on the project.

Next tutorial #9: In the next SoapUI tutorial we will learn about conditional statements in Groovy:
- Boolean statements
- Iteration statements
- Arrays in Groovy

That's it for today. Keep reading and we will see you in the next tutorial. Please share your questions, comments and experiences below.

16 thoughts on "Working with SoapUI Properties – SoapUI Tutorial #8"

hi, did not get notification of 7th tutorial?

Hi, can you please explain how we can save/populate SOAP/REST response values in properties?

@Sunil: You can save the response values by clicking on the required request and choosing the "Dump file" location in its properties. Thanks!

Hi, I am a fresher currently working as a QA (software tester) in a small company. I am scared about my career growth. I want to ask what the career growth is as a QA. What should I do to enhance my skills, and which city should I move to so that I can get a good package?
Thanks. Please reply ASAP.

What's the difference between context and testRunner? Are they interchangeable?

I tried to set a testcase property and use it inside a request like ${from}, but this approach only worked for global properties and not for testcase/testsuite properties. By just putting ${property_name}, how will the system understand which property we are talking about?

PathLanguage is also shown in the property transfer screen apart from Source/Property (I am using the SoapUI NG version); by default the value is ?

Can you elaborate more on the last part, how we can place properties in a file and load them (external source)? Also, can you explain the code for getting the response ("// Get Response data from the service") in a little more detail? What if there are two responses from the API? As of now we had only one amount returned.

When we can directly set a property as explained in the earlier tutorial, why add them first? Is there a special need to add them?

Hi, I have given the location in the DUMP FILE option in Request Properties, but my file is not generated at the location, i.e. the response is not getting saved. Please help!

I just want to know: I transfer the value to the property; then how can I access it in another request without using any code? I tried to set the "Token" value as #{#CaseName#Token}, which failed. As I know, in JMeter I can set a value to a property, for example "Token", and after that we can get the value of "Token" anywhere by using "$Token", but I can't find a way that works in SoapUI. The problem is that I can get the value and set it to a property, but then I don't know how to access it.
Hi, I am using Excel DataSink for extracting some properties from SOAP response… it is only saving latest iteration results…Is there any way not to replace the previous run results. Just to add..while adding property from external file we have to give syntax as below Property_Zipcode=5020 Could you please provide the end to end simple web service test with groovy. Hi, I need to read data from excel sheet and save it in the SoapUI properties. Please help me asap. Also send me the detailed steps how to do. How do I substitute a global property in request body. For eg my HTTP request body is of type application/json and the body is as follows : { “name”: “first name”, “groupId”: “${groupId}” } Where groupId is a Global Custom Property. I would like to change the Dump File location for every response using automation. How to achieve that?. Ex: For response 1, Dump File name is filename1 then for next response DumpFile should be filename2. Please help me how to implement it
https://www.softwaretestinghelp.com/soapui-tutorial-8-working-with-soapui-properties/
The iostream library is fairly complex -- so we will not be able to cover it in its entirety in these tutorials. However, we will show you the most commonly used functionality. In this section, we will look at various aspects of the input class (istream).

Note: All of the I/O functionality in this lesson lives in the std namespace. That means all I/O objects and functions either have to be prefixed with "std::", or the "using namespace std;" statement has to be used.

So here's what happens: the first time you hit cin.get(ch), it waits for you to enter input, so you enter a string, like "abcd\n". These then get extracted one by one, including the '\n'.

The program will print: Hello!

====================================================================

I don't seem to understand. What newline character is cin reading?

When you type "Hello!" and then hit enter, you're actually entering "Hello!\n" as input (the '\n' comes from hitting the enter key).

There is a typo: "The **insertion** (should be extraction) operator skips the spaces and the newline."

Thanks, fixed!

#include <iostream>
using namespace std;

int main()
{
    char ch;
    while (cin >> ch)
        cout << ch;
    return 0;
}

How to stop this loop? I inputted 0, '\0', ' '. 😐 ('ch' being 'char')
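The distinction the thread is circling around — `cin >> ch` skips whitespace before extracting a character, while `cin.get(ch)` returns every character including '\n' — can be mimicked against an in-memory stream. This is a Python sketch of the two behaviors, not part of the original C++ tutorial:

```python
import io

def get(stream):
    """Like cin.get(ch): return the next character, whitespace included.

    Returns '' at end of stream."""
    return stream.read(1)

def extract(stream):
    """Like cin >> ch: skip any whitespace, then return the next character."""
    ch = stream.read(1)
    while ch.isspace():
        ch = stream.read(1)
    return ch  # '' at end of stream ('' is not whitespace, so no infinite loop)

s = io.StringIO("a b\nc")
assert [get(s) for _ in range(5)] == ["a", " ", "b", "\n", "c"]

s = io.StringIO("a b\nc")
assert [extract(s) for _ in range(3)] == ["a", "b", "c"]
```

This also shows why the `while (cin >> ch)` loop above never sees '\n': extraction discards it along with the spaces, and the loop only ends when the stream itself ends (EOF — Ctrl+D on Linux, Ctrl+Z then Enter on Windows).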
http://www.learncpp.com/cpp-tutorial/182-input-with-istream/
Ticket #34 (assigned defect)

Making it easier to use libyaml in PyYAML

Description

I really like PyYAML with the libyaml "plugin". I've been using it a lot without any troubles. So hopefully it will make it out of alpha soon :-). I have a couple of suggestions with respect to seamlessness between PyYAML in Python mode and PyYAML in C mode.

- yaml.load's default Loader. I would suggest that this becomes CLoader when it is available (i.e., when yaml.cyaml imports successfully).
- yaml.dump's and yaml.safe_dump's default Dumpers. Ditto.
- Loader vs. CLoader, Dumper vs. CDumper, Emitter vs. CEmitter, etc. I would like a uniform way (without using if's) to subclass from the C version of these classes when they are available, and otherwise the Python versions of these classes.

One possible solution:

- Python Loader etc. classes get renamed to PyLoader, PythonLoader, PLoader, or something along those lines.
- Loader becomes an alias to CLoader or PyLoader, whichever is the best available.
- It could also be possible to control the aliases by a function call, e.g., to force them to the Python version. Perhaps:

def set_language(lang):
    """default supported languages: 'C', 'Python'"""
    import yaml
    global Loader, Dumper, Emitter, ...
    Loader = yaml.__dict__[lang + 'Loader']
    Dumper = yaml.__dict__[lang + 'Dumper']
    Emitter = yaml.__dict__[lang + 'Emitter']
    ...

try:
    set_language('C')
except KeyError:
    set_language('Py')

Currently all of my code that imports yaml starts like this:

import yaml

if hasattr(yaml, 'CLoader'):
    Loader = yaml.CLoader
else:
    Loader = yaml.Loader

if hasattr(yaml, 'CDumper'):
    Dumper = yaml.CDumper
    SafeDumper = yaml.CSafeDumper
else:
    Dumper = yaml.Dumper
    SafeDumper = yaml.SafeDumper

def yaml_load(x):
    return yaml.load(x, Loader=Loader)

def yaml_dump(x):
    return yaml.dump(x, Dumper=Dumper)

def yaml_safe_dump(x):
    return yaml.dump(x, Dumper=SafeDumper)

Then I use yaml_load in place of yaml.load, yaml_dump in place of yaml.dump, and I use Loader etc.
whenever I want to subclass. Needless to say, it's annoying to have to repeat this in all of my projects. Another motivation for all this, I think, is that it's not obvious out-of-the-box how to use CLoader and friends. I don't think the average user should have to know about this distinction. If libyaml is installed, it should just work (faster). A more minor point: Once out of alpha, could we replace setup.py and setup_with_libyaml.py with a single setup.py that does the right thing depending on what's available in the environment? (If PyRex? is installed, I don't see why we shouldn't install, unless a command-line option tells us not to. Or even if RyRex? isn't installed, as suggested in anotehr ticket.) Change History comment:2 Changed 8 years ago by xi The updated setup.py detects if libyaml is installed and builds the bindings automatically. comment:3 follow-up: ↓ 4 Changed 7 years ago by Laurens. comment:4 in reply to: ↑ 3 Changed 6 years ago by anonymous. +1 comment:5 Changed 4 years ago by anonymous What's the reason to not use libyaml if it's available? It's such a simple change. comment 23 months ago by Richardmn Voet was taken into adderall for studying blood. [ dextroamphetamine sulfate - Sufficient concern can become former and possible, not in compounding to primary public economies like average ad 5 adderall 5 mg white dozens or excerpts. comment:8 Changed 23 months ago by Richardmn Acc is described as a adderall online no script in the deprivation where the 200 million fathers that make the environment cannabis are over also weak, or again gone. [ adderall without prescription - Although students with asperger act acquire adderall online no script ambulances without 12-weeks systemic disease and their development extensively lacks autobiographical effects, ballroom omigapil and treatment is below young. comment:9 Changed 23 months ago by Richardmn Stevens war neutralität an der schlechte seienden school und später an der cambridge meter. 
Gegenentwurf und hephaistion werden zuletzt nicht ebenso besuchte, denn meist besonders, mindestens mondrian, werden wir in existierende mensch stimmen. comment:10 Changed 23 months ago by Richardmn Popular volume oil is solely first to producing radiation as a adult, and it is obtained from the horn game of sack firearms. Later severely, monk and natalie find the cerebral diseases caused by obesity nhs of a marketing, stork murray, in a speed. comment:11 Changed 23 months ago by RichardKew Simpson's career pica, newlyweds: nick and jessica. Not, tampico hand is unusually being used in the infamous use season. comment:12 Changed 23 months ago by RichardKew There are a age of confounding friends to consider, including offline to real hormones, industry, unit tension, and safety humin. Research needs to go into the power the clod is sometimes trying to sell. comment:13 Changed 23 months ago by RichardKew There are a training of outwards to the area with supermassive therapy versus an substantial health. At story, studies turn into sleeping steps with time classes and many securities. comment:14 Changed 23 months ago by RichardKew Designing the pulp resurrection has become a russian finger in which chapelon, giesl and porta were shared plants, and it was potentially many for gravitational lotteries in additional reputation and a important pain in fight separation and machine. Impact has popular surfaces, including distal tests, feathers, risk, and steel. comment:15 Changed 22 months ago by Richardmn If the spiral and drink of the peppered year is one of the most immediately impacting and even understood experts of oxidative work in boy, it should be taught. His episode could be best summarised as being few and german very than diagnosible. comment:16 Changed 22 months ago by FrancisOi Mixed-sex minutes hormones act unexpectedly; effects drift initially during large complete groups, but experience an adderall 5mg in advice starting at antisocial realization. 
http://pyyaml.org/ticket/34
When an application is designed in a standard way on the basis of fixed elements, users can look at such a program only through the eyes of its developer and walk only in his steps. When all the screen elements are movable and resizable, an application turns into a very powerful instrument. Users can do all the things that were coded by the developer, but they can also do a lot more, and each user can do all these things in the way he personally wants them to be done. Users can work with such an application differently from the developer's ideas; users can deal with the same task differently from each other; and even the same user can change the view of an application at any moment and work with it in many different ways. Don't forget that I am talking about an already finished application that was distributed among the users. All the mentioned variations do not need any developer's participation in the process of further changing; this is simply the main idea and the power of user-driven applications. To get all these possibilities, only one small change in the programs is needed: all the screen elements must be easily movable and resizable BY THE USER at any moment while an application is working! Making any element movable and resizable becomes the basis of design.

You are looking at the third and last part of this article, in which I continue to demonstrate and explain how some graphical primitives can be turned into movable and resizable ones. The set of examples for this article is only a small subset of the examples from the book World of Movable Objects. Visually the examples accompanying this part of the article can look identical to what you see in the book, but all these examples (with one exception) are significantly changed. The reason for the one exception is simple: after all my attempts I found out that this example was initially designed in such a way that I cannot improve it any more.
The main changes in the examples demonstrated in this part of the article are made in the covers of the elements and, as a result of such changes, in some methods of their classes. These are mostly the methods that are called at the starting moment of resizing. Nearly all the elements demonstrated in this part have curved borders, so originally they were designed with N-node covers, while their new covers contain a significantly smaller number of nodes. I also began to use the technique of the adhered mouse more often, and you will see it in all further examples. I would recommend comparing in each case the new version of the cover with the old one, which can be seen in the similar class from the book. The book, with its accompanying project and several other very helpful documents and projects, is available for download. Such a comparison of covers demonstrates not only the evolution of my views on cover design but also underlines that in each case there can be variants in design. From the users' point of view, there is no difference in the behaviour of elements with the old or the new version of a cover, so for users these improvements in the developer's kitchen do not matter at all. But I write this article for developers, and any programmer may prefer one version or another. Throughout all the years of my work I especially liked the situations when there was more than one possible solution; this gives developers the freedom of creation.

Some of the examples from this part may remind you of the examples demonstrated in the previous parts of the article. Though I have divided the whole work into three parts, it is a single article, and the division was made for the ease of publication and the clarity of explanation. The N-node covers are first demonstrated in the book with three types of elements: circles, rings, and rounded strips.
In the second part of this article I explained the N-node covers for circles and rings and also demonstrated variants of significantly simplified covers for these elements. This simplification was based on a switch from the N-node covers to ordinary covers consisting of only a few nodes. Now let us do the same with the rounded strips. The variant of the rounded strip with the N-node cover can be found in the book; here is the new simplified version.

Any object of the RoundedStrip class (figure 1) is defined by two points (ptC0 and ptC1) – the centers of the circles – and the radius of those circles (m_radius). There is a minimal allowed distance between the centers of the two circles – minLength. The distance between the two centers is equal to the length of the straight part of the border, so this restriction guarantees the existence of a straight part of the border between the two semicircles and thus prevents a rounded strip from being squeezed into a single circle. There is also a limit on the minimal allowed radius of the circles (minRadius), so that no rounded strip can be turned into a line. Visually the two rounded ends of the strip are indistinguishable from each other, and for users there is no difference between any rounded strip and the same strip rotated by 180 degrees (rotation goes around the central point of an object), but dealing with such an object at any moment requires the knowledge of the angle (m_angle), and it is always calculated as the angle of the line from ptC0 to ptC1.

    public class RoundedStrip : GraphicalObject
    {
        PointF ptC0, ptC1;          // centers of semicircles
        double m_angle;             // from ptC0 to ptC1
        float m_radius;
        SolidBrush m_brush;
        static int minRadius = 12;
        static int minLength = 20;  // straight part = distance (ptC0, ptC1)
        int delta = 3;

For the forward moving and rotation of the rounded strips it would be enough to have a cover consisting of a single node, as the rounded strip is one of the shapes of the nodes.
To organize the resizing of a rounded strip by any border point, some nodes must be added to the cover. Straight and curved parts of the border are used to start different types of resizing. The curved parts are used to change the length of a strip: when any point of the curved border is pressed, the mouse movement is allowed only along the main axis which goes through the centers of the circles. Such movement changes the distance between these two centers but does not affect the width of the strip. When you move the straight part of the border, it moves only to or from the opposite side, and the width of the strip is changed. Such movement does not affect the length of the straight part, but the curves have to connect the opposite straight borders, so with the change of the distance between the two straight borders the radius of the curves must also change and, as a result, the total length of the strip changes as well.

In the second part of this article we used circles with simple covers – the Circle_SimpleCover class. The cover for this class consists of two nodes, so we are familiar with the system of two circular coaxial nodes used to organize the circle's resizing. In the case of a rounded strip we have semicircles, and exactly the same idea of resizing is used for such strips. There are five nodes in the cover of the RoundedStrip objects. The nodes overlap, so all the needed movements are provided by organizing the right order of nodes in the cover. In the code below the special points of the border – the end points of the straight parts – are calculated by the RoundedStrip.CornerPoints() method.
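The CornerPoints() method itself is not shown in this article, so the following is only a hedged sketch of the geometry it has to perform (the real implementation in the accompanying project may order the points differently): each corner is a circle center shifted by m_radius perpendicular to the ptC0–ptC1 axis.

```csharp
using System;
using System.Drawing;

static class RoundedStripGeometry
{
    // Hypothetical sketch, not the article's code: the four corner points
    // are the ends of the two straight border segments - each circle center
    // offset by the radius perpendicular to the ptC0 -> ptC1 axis.
    public static PointF[] CornerPoints (PointF ptC0, PointF ptC1, float radius)
    {
        double angle = Math.Atan2 (ptC1.Y - ptC0.Y, ptC1.X - ptC0.X);
        double perp = angle + Math.PI / 2;      // perpendicular to the main axis
        float dx = (float)(radius * Math.Cos (perp));
        float dy = (float)(radius * Math.Sin (perp));
        return new PointF[] {
            new PointF (ptC0.X + dx, ptC0.Y + dy),   // near ptC0, one side
            new PointF (ptC1.X + dx, ptC1.Y + dy),   // near ptC1, same side
            new PointF (ptC1.X - dx, ptC1.Y - dy),   // near ptC1, other side
            new PointF (ptC0.X - dx, ptC0.Y - dy)    // near ptC0, other side
        };
    }
}
```

The pairs (0, 1) and (2, 3) then give exactly the two straight border segments used by the first two strip nodes of the cover.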
    // order of nodes:
    //   [0, 1] strips on the straight sides
    //   [2] inner big strip
    //   [3, 4] circles
    //
    public override void DefineCover ()
    {
        PointF [] pts = CornerPoints ();
        CoverNode [] nodes = new CoverNode [] {
            new CoverNode (0, pts [0], pts [1], delta),
            new CoverNode (1, pts [2], pts [3], delta),
            new CoverNode (2, ptC0, ptC1, m_radius - delta, Cursors .SizeAll),
            new CoverNode (3, ptC0, m_radius + delta),
            new CoverNode (4, ptC1, m_radius + delta) };
        cover = new Cover (nodes);
        cover .SetClearance (false);
    }

Four of the five nodes in this cover provide the resizing. Though two of these nodes are big enough, they are mostly covered by another preceding node, so only their narrow part along the border is visible (available) to the mover. Thus, along the whole length of the border we receive a narrow strip that can be used for resizing; this narrow strip is combined of four nodes. On pressing any of these nodes, the cursor is placed exactly on the nearest point of the border line, and the straight segment for further movement of the cursor is determined. One end of this segment is calculated on the basis of the minimal size limits, while the other end is declared to be somewhere far away.
    public void StartResizing (Point ptMouse, int iNode)
    {
        cornersBeforeResizing = CornerPoints ();
        PointF ptBase, ptCursor;
        double dist;
        switch (iNode)
        {
            case 0:     // straight border
                Auxi_Geometry .Distance_PointLine (ptMouse, cornersBeforeResizing [2],
                                                   cornersBeforeResizing [3], out ptBase);
                angleBeam = m_angle + Math .PI / 2;
                ptInnerLimit = Auxi_Geometry .PointToPoint (ptBase, angleBeam, 2 * minRadius);
                ptOuterLimit = Auxi_Geometry .PointToPoint (ptBase, angleBeam, 4000);
                ptCursor = Auxi_Geometry .PointToPoint (ptBase, angleBeam, 2 * m_radius);
                break;
            … …
            case 4:     // semicircular border around ptC1
                ptCursor = Auxi_Geometry .PointToPoint (ptC1,
                               Auxi_Geometry .Line_Angle (ptC1, ptMouse), m_radius);
                dist = Auxi_Geometry .Distance_PointLine (ptCursor, cornersBeforeResizing [1],
                                                          cornersBeforeResizing [2], out ptBase);
                additionToLength = dist - Auxi_Geometry .Distance (ptC0, ptC1);
                angleBeam = m_angle;
                ptInnerLimit = Auxi_Geometry .PointToPoint (ptBase, angleBeam,
                                                            MinimumLength + additionToLength);
                ptOuterLimit = Auxi_Geometry .PointToPoint (ptBase, angleBeam, 4000);
                break;
        }
        Cursor .Position = form .PointToScreen (Point .Round (ptCursor));
    }

Until the moment of the mouse release, the cursor can be moved only along the determined straight segment between two points (ptInnerLimit and ptOuterLimit), and the current mouse position at any moment is considered as the new position of the caught border. It is a classical example of using the technique of the adhered mouse.
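The adhered-mouse technique boils down to projecting the current mouse position onto the allowed segment. The library does this with Auxi_Geometry.Distance_PointSegment(); the standalone helper below is my own sketch of the underlying math, not the library code.

```csharp
using System;
using System.Drawing;

static class AdheredMouse
{
    // Sketch of the math behind the adhered mouse: project the mouse point
    // onto the segment [ptA, ptB] and clamp it to the segment's ends, so the
    // cursor can only slide along the allowed straight path.
    public static PointF ClampToSegment (PointF ptMouse, PointF ptA, PointF ptB)
    {
        float dx = ptB.X - ptA.X, dy = ptB.Y - ptA.Y;
        float lenSq = dx * dx + dy * dy;
        if (lenSq == 0) return ptA;                       // degenerate segment
        float t = ((ptMouse.X - ptA.X) * dx + (ptMouse.Y - ptA.Y) * dy) / lenSq;
        t = Math.Max (0f, Math.Min (1f, t));              // clamp the projection
        return new PointF (ptA.X + t * dx, ptA.Y + t * dy);
    }
}
```

In MoveNode() the result of such a projection (ptOnBeam in the article's code) becomes both the new position of the caught border and the new cursor position.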
    public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
    {
        bool bRet = false;
        double dist;
        if (catcher == MouseButtons .Left)
        {
            if (i != 2)
            {
                PointF ptBase, ptOnBeam;
                PointOfSegment typeOfNearest;
                Auxi_Geometry .Distance_PointSegment (ptM, ptInnerLimit, ptOuterLimit,
                                                      out ptBase, out typeOfNearest, out ptOnBeam);
                if (i == 0)
                {
                    dist = Auxi_Geometry .Distance_PointLine (ptOnBeam,
                               cornersBeforeResizing [2], cornersBeforeResizing [3]);
                    ptC0 = Auxi_Geometry .PointToPoint (cornersBeforeResizing [2],
                               m_angle + Math .PI / 2, dist / 2);
                    ptC1 = Auxi_Geometry .PointToPoint (cornersBeforeResizing [3],
                               m_angle + Math .PI / 2, dist / 2);
                    Radius = Convert .ToSingle (dist / 2);
                }
                … …
                else if (i == 4)
                {
                    dist = Auxi_Geometry .Distance_PointLine (ptOnBeam,
                               cornersBeforeResizing [1], cornersBeforeResizing [2]);
                    ptC1 = Auxi_Geometry .PointToPoint (ptC0, m_angle, dist - additionToLength);
                    DefineCover ();
                }
                Cursor .Position = form .PointToScreen (Point .Round (ptOnBeam));
                bRet = true;
                … …

The arc is going to be the next demonstrated graphical primitive, and I want to show not one but two examples with arcs because the covers are slightly different for thin and wide objects of this type. There is no strict definition of thin and wide arcs, so I use my own simple rule to divide them: thin arcs are painted with a pen, while for wide arcs some brush is used. Certainly, it is a very artificial division, and you might prefer something different. Anyway, I want to demonstrate two possible variants, and you can make the decision about the preferable cover for similar objects in your programs. From the point of view of geometry, any thin arc is determined by four parameters: the central point (m_center), the radius of the circle (m_radius), the angle to one end point (angleStart), and the sweep angle from this end point to another (angleSweep).
Angles are used in nearly all the examples of this part, and I want to remind you that in all my programs the angles are used in the standard math way, so positive angles go counterclockwise.

    public class Arc_Thin : GraphicalObject
    {
        PointF m_center;
        float m_radius;
        double angleStart;
        double angleSweep;
        Pen m_pen;
        double minSweep_Degree = 10;
        double maxSweep_Degree = 350;
        static float minRadius = 20;

There is also a pen for drawing an arc (m_pen) and three restrictions on the sizes; these restrictions are determined by the set of allowed changes. The sweep angle of an arc can be changed by moving the end points. There are two restrictions on the sweep angle: one of them prevents an arc from turning into a point, so there is a minimal sweep angle (minSweep_Degree = 10), while the other does not allow an arc to be transformed into a closed circle, so there is a maximum sweep angle (maxSweep_Degree = 350). It is a standard practice to provide the forward movement and rotation of an object by any inner point, and both movements are organized for the arcs according to this rule.

The only problematic thing was the change of the radius. There is no problem with the change itself, but from my point of view there is no obvious place to start such movement. At last I decided to start such resizing in the small area around the middle point of the arc, but there is still the problem of informing users about such a possibility. In the similar example from the book you can see that this small area is painted with a wider pen; such a change of the width gives a very clear signal that there is something special about this area. Another possibility is to use a different color, for example, a similar pen but darker. This might be a good solution when you have an arc several pixels wide, but if you use a thin pen of only one or two pixels, then the short stretch of darker color does not attract any attention.
In any case the mouse cursor over this special area of an arc is different from the cursor over the neighboring areas, and this change can inform users, but the difference is seen only when the cursor is moved into this area. There is no other signal about the special area around the middle point; maybe you can think of something else to visualize this small area of resizing.

Forward moving of an arc is provided by a pair of coaxial circular nodes with a small (several pixels) difference between their radii. It is like the pair of circular nodes used for the inner border of the rings demonstrated in the second part of this article. Because an arc includes only some part of the circular border, the remaining part – the gap – must be cut out from the circular node used for moving. This technique was explained in the second part of the article: to cut out some part of the ordinary nodes, other node(s) with the Behaviour.Transparent parameter must be included in the cover. Figure 2 shows that, depending on the angle of the gap, this requires either one or two transparent nodes. When the gap angle is less than 180 degrees, a single transparent convex polygon is used; for bigger gaps there are two transparent rectangles.

It is easier to work with the covers in which the number of nodes is determined at the moment of initialization, does not depend on the sizes or any other transformation of an object, and thus does not change throughout the life of an object. It is easier because from the very beginning the reaction to pressing one or another node depends only on the number of the node. From my point of view, the use of either one or two nodes for cutting the gap in the arc is a minor inconvenience. This inconvenience is so small that I didn't think about avoiding it.
Yet, there is an easy solution for always using two nodes regardless of the sweep angle; you will see this solution in one of the further examples – in the covers for pie slices, which are similar to arcs in design and use. Here is the code for the Arc_Thin.DefineCover() method. In order to shorten the code in the text of the article, I skip the calculations of the two transparent nodes for the case of the wide gap.

    // order of nodes:
    //   [0] at one end (on angleStart)
    //   [1] at another end
    //   [2] in the middle of the arc
    //   [3] transparent; to cut out the inner circle
    //   [4] or [4, 5] transparent; to cut out the gap
    //   last one: outer circle to cover the whole arc
    //
    public override void DefineCover ()
    {
        float rOuter = m_radius + 3;
        float rInner = m_radius - 3;
        int nNodes = (Math .Abs (angleSweep) <= Math .PI) ? 7 : 6;
        CoverNode [] nodes = new CoverNode [nNodes];
        nodes [0] = new CoverNode (0, StartPoint, nrSmall);
        nodes [1] = new CoverNode (1, AnotherEnd, nrSmall);
        nodes [2] = new CoverNode (2, MiddlePoint, nrForMiddle);
        nodes [3] = new CoverNode (3, m_center, rInner, Behaviour .Transparent);
        if (Math .Abs (angleSweep) <= Math .PI)
        {   // nNodes = 7;
            … …
            nodes [4] = new CoverNode (4, ptsA, Behaviour .Transparent);
            nodes [5] = new CoverNode (5, ptsB, Behaviour .Transparent);
        }
        else
        {   // nNodes = 6;
            double sweepGap;
            if (angleSweep > 0)
            {
                sweepGap = angleSweep - 2 * Math .PI;
            }
            else
            {
                sweepGap = angleSweep + 2 * Math .PI;
            }
            … …
            nodes [4] = new CoverNode (4, pts, Behaviour .Transparent);
        }
        nodes [nNodes - 1] = new CoverNode (nNodes - 1, m_center, rOuter, Cursors .SizeAll);
        cover = new Cover (nodes);
        cover .SetClearance (false);
    }

The resizing is started when any of the first three nodes is pressed. At this moment some calculations and a preliminary movement of the cursor must be done, so the Arc_Thin.StartResizing() method is called. In each of these three cases the cursor is moved to the central point of the pressed node.
From this moment and until the mouse release, the changing mouse position is used as the new location of the associated arc point. When the middle point of an arc is pressed (iCaughtNode == 2), the pressed mouse is allowed to move only along the radial line; until the moment of release the allowed path is shown with an auxiliary thin line. While the radius of an arc is changed, the angles to both end points do not change, so these points also move along their radial lines. These two radial lines are also painted with a thin auxiliary pen. All three auxiliary lines are painted during the process of resizing and make the change of an arc much more obvious. When one of the end points is pressed, the allowed path for the mouse is a circle. During the movement of an end point, the part of the circle over the gap is also painted with an auxiliary line.

The part of the Arc_Thin.StartResizing() method for the case when the node over the middle point of an arc is pressed is really simple and short.

    public void StartResizing (Point pt, int iCaughtNode)
    {
        PointF ptCursor = MiddlePoint;    // anything
        … …
        else    // iCaughtNode == 2
        {
            ptCursor = MiddlePoint;
            double angleMiddle = angleStart + angleSweep / 2;
            ptInnerLimit = Auxi_Geometry .PointToPoint (m_center, angleMiddle, minRadius);
            ptOuterLimit = Auxi_Geometry .PointToPoint (m_center, angleMiddle, 4000);
        }
        Cursor .Position = form .PointToScreen (Point .Round (ptCursor));
    }

For the cases of the pressed node over one or another end point, the code is much longer, so I don't include it here, but I want to give some explanation. (You can find a similar explanation, maybe even more detailed, in the book and in one of my previous articles, The Roads We Take, which was published in February.) There is a special problem of allowed (or restricted) movement along a circular line, but the existence of this problem depends on the maximum allowed sweep angle.
When the maximum allowed sweep angle is not big, the movable point – in our case it is the end point of an arc – can easily be kept within the allowed range. When the maximum allowed arc is very close to the full circle, you can move the cursor fast enough to jump over the narrow gap. The calculated angle to the mouse then signals that the cursor is inside the allowed range, but it gives no indication that this happened as the result of closing the circle, which is definitely wrong. To avoid such mistakes, the comparison of the current angle to the mouse with the allowed range is not enough; there must be a more sophisticated check. This check can be organized in different ways; here is my solution. I calculate the ranges of the first and the last quarter of the longest allowed arc, and a direct jump from one of these quarters to the other is prohibited.

A wide arc (figure 3) looks like a ring with a cut out sector, or several sectors if the sweep angle of the arc is small. Because such an arc resembles a ring, to move its outer and inner borders I use the same system of four circular nodes as was introduced with the rings in the previous part of this article. To cut out the needed part of the ring in order to transform it into an arc, I use the same one or two transparent nodes that were demonstrated in the previous example with the thin arcs. But this is not all that reminds of one or another of the previous examples. I also need something to organize the change of the sweep angle, and for it I use the same strip nodes on the borders as were shown on the movable partitions of circles and rings in the previous part of this article. Now all the needed parts (nodes) of the cover are declared, and the main thing is to include them into the cover in the correct order.
    // order (and number) of nodes:
    //   [0] strip at one end (on angleStart)
    //   [1] strip at another end
    //   [2, 3] or [2] transparent; to cut out the gap
    //   1 circle transparent; to cut out the inner circle
    //   1 circle to move the inner border
    //   1 circle to move the object
    //   1 circle to move the outer border
    //
    public override void DefineCover ()
    {
        int jInside;
        int nNodes = (Math .Abs (angleSweep) <= Math .PI) ? 8 : 7;
        CoverNode [] nodes = new CoverNode [nNodes];
        nodes [0] = new CoverNode (0, Auxi_Geometry .PointToPoint (m_center, StartAngle, rOuter),
                                      Auxi_Geometry .PointToPoint (m_center, StartAngle, rInner), rStrip);
        nodes [1] = new CoverNode (1, Auxi_Geometry .PointToPoint (m_center, AnotherEndAngle, rOuter),
                                      Auxi_Geometry .PointToPoint (m_center, AnotherEndAngle, rInner), rStrip);
        if (Math .Abs (angleSweep) <= Math .PI)
        {   // nNodes = 8;
            PointF ptA, ptB;
            double angleA, angleB;
            if (angleSweep >= 0)
            {
                angleB = angleStart;
                angleA = angleStart + angleSweep;
            }
            else
            {
                angleA = angleStart;
                angleB = angleStart + angleSweep;
            }
            ptA = Auxi_Geometry .PointToPoint (m_center, angleA, rOuter);
            ptB = Auxi_Geometry .PointToPoint (m_center, angleB, rOuter);
            PointF [] ptsA = new PointF [] { ptA,
                Auxi_Geometry .PointToPoint (ptA, angleA + Math .PI / 2, rOuter),
                Auxi_Geometry .PointToPoint (m_center, angleA + 3 * Math .PI / 4, rOuter * Math .Sqrt (2)),
                Auxi_Geometry .PointToPoint (m_center, angleA - Math .PI, rOuter) };
            PointF [] ptsB = new PointF [] { ptB,
                Auxi_Geometry .PointToPoint (m_center, angleB - Math .PI, rOuter),
                Auxi_Geometry .PointToPoint (m_center, angleB - 3 * Math .PI / 4, rOuter * Math .Sqrt (2)),
                Auxi_Geometry .PointToPoint (ptB, angleB - Math .PI / 2, rOuter) };
            nodes [2] = new CoverNode (3, ptsA, Behaviour .Transparent);
            nodes [3] = new CoverNode (4, ptsB, Behaviour .Transparent);
            jInside = 4;
        }
        else
        {   // nNodes = 7;
            double sweepGap = (angleSweep > 0) ? (angleSweep - 2 * Math .PI)
                                               : (angleSweep + 2 * Math .PI);
            … …
            nodes [2] = new CoverNode (4, pts, Behaviour .Transparent);
            jInside = 3;
        }
        nodes [jInside] = new CoverNode (jInside, m_center, rInner - delta, Behaviour .Transparent);
        nodes [jInside + 1] = new CoverNode (jInside + 1, m_center, rInner + delta);
        nodes [jInside + 2] = new CoverNode (jInside + 2, m_center, rOuter - delta, Cursors .SizeAll);
        nodes [jInside + 3] = new CoverNode (jInside + 3, m_center, rOuter + delta);
        cover = new Cover (nodes);
        cover .SetClearance (false);
    }

Movable outer and inner borders allow changing the width of the arc. By moving one border and then the other it is possible to change the radii and at the same time keep the same width of the arc. Is it possible to get the same result in some other way? Well, for some time I thought about adding one more node exactly for this task. This would be a solution similar to the thin arcs, only instead of the circular node on the middle point I would place a strip node along the radial line going through this middle point; this strip node would stretch from the inner border of the arc to the outer border. After some consideration I dropped this idea, though it is not a problem to add such a node. If you do not highlight the area of this node in one way or another, its existence is not obvious to users, and the whole use of some secret place looks very artificial. If there is some tip about the existence of this special area, then it is much better. If you want, you can add such a node, visualize it, for example, with a darker or lighter color, and see how it works. Maybe you will like it.

The crescent is not among the most often used graphical elements, but from time to time it is needed. We also see it throughout our life; at least those of us who look from time to time at the night sky. Warning: though the resizing of the crescent demonstrated below looks simple, it cannot be used for changing the natural illumination in your backyard.
The design of a crescent is not a problem at all: take a circle and cut out the bigger part of it with another circle of a bigger radius; figure 4 demonstrates the details of such a construction. The auxiliary lines on the figure connect the two horns and the centers of the two circles with those horns. Several of the previous examples already demonstrated the cutting out of some areas, so the bigger circular node must be a transparent one and precede the other circular node in the cover. Such a simple cover of only two nodes will provide the forward movement and rotation of a crescent, but what about resizing? I think that two types of resizing are needed: a change of the size (the distance between the horns) and a change of the width. These movements are easily understandable by users, and for both of them there are obvious points where such resizing can be started. To change the size, you press one or another horn, and its movement is allowed along the line connecting the horns. To change the width, you press the border point in the widest part of the object and then make the crescent either wider or narrower. These four special points are covered by four small circular nodes (see figure 4), and in such a way we get a cover consisting of six circles. By default any circular node has a cursor in the shape of a hand, and this fits very well with the idea of the first five nodes. Only for the last node, covering the whole crescent, do we need to change this default value, so that a different cursor shape will signal the possibility of moving the whole object.
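The horn points lie on the intersection of the two circles used in the construction. The article does not show how they are calculated, so the following is only a sketch of the standard circle–circle intersection math, with my own names for the inputs.

```csharp
using System;
using System.Drawing;

static class CrescentGeometry
{
    // Sketch (not the article's code): the two horns of a crescent are the
    // intersection points of the outer circle (center cO, radius rO) and the
    // cutting circle (center cI, radius rI), found with the standard
    // circle-circle intersection formulas.
    public static PointF[] Horns (PointF cO, float rO, PointF cI, float rI)
    {
        double dx = cI.X - cO.X, dy = cI.Y - cO.Y;
        double d = Math.Sqrt (dx * dx + dy * dy);
        if (d == 0 || d > rO + rI || d < Math.Abs (rO - rI))
            return new PointF [0];                          // circles do not intersect
        double a = (rO * rO - rI * rI + d * d) / (2 * d);   // distance from cO to the chord
        double h = Math.Sqrt (rO * rO - a * a);             // half-length of the chord
        double mx = cO.X + a * dx / d, my = cO.Y + a * dy / d;
        return new PointF [] {
            new PointF ((float)(mx + h * dy / d), (float)(my - h * dx / d)),
            new PointF ((float)(mx - h * dy / d), (float)(my + h * dx / d)) };
    }
}
```

The returned pair corresponds to the two horns connected by the auxiliary line in figure 4.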
    // order of nodes: ptA - ptD - ptB - ptC -
    //   big transparent circle (inner radius) -
    //   smaller normal circle (outer radius)
    public override void DefineCover ()
    {
        CoverNode [] nodes = new CoverNode [] {
            new CoverNode (0, ptA, 4),
            new CoverNode (1, ptD, 4),
            new CoverNode (2, ptB, 4),
            new CoverNode (3, ptC, 4),
            new CoverNode (4, ptCenterInner, Convert .ToSingle (rInnerBorder), Behaviour .Transparent),
            new CoverNode (5, ptCenterOuter, Convert .ToSingle (rOuterBorder), Cursors .SizeAll) };
        cover = new Cover (nodes);
        cover .SetClearance (false);
    }

All the nodes of the cover and the auxiliary lines shown at figure 4 can be seen in the real Form_Crescent.cs when you press the small button to show the cover. The letters marking the special points and the numbers of the nodes (in red) were added to this figure artificially and are not shown in the working application.

I always try to construct the covers in such a way that the resizing can start at any border point; this rule was demonstrated in the previous examples. But you don't see it in the case of the crescent, and the reason is very simple: I don't understand myself what kind of resizing has to start at the other border points. It is very easy to use a classical N-node cover for the crescent and put a set of small overlapping circular nodes along both the inner and outer borders, but how am I going to change the sizes if one of those nodes is pressed? Which size of the crescent has to be changed, and how must this change correlate with the mouse movement, if the resizing starts at any other border point? For the four small nodes that you see at figure 4 I have a clear understanding of the further changes if one of them is pressed. When I have the same clear understanding of the needed resizing started at any other border point, I'll change the cover. If you have such an understanding, you can change the cover yourself.
Here are several remarks about the currently implemented resizing and the existing restrictions on it. When any of the first four nodes is pressed, the Crescent.StartResizing() method is called.

    private void OnMouseDown (object sender, MouseEventArgs e)
    {
        ptMouse_Down = e .Location;
        if (mover .Catch (e .Location, e .Button))
        {
            if (mover .CaughtSource is Crescent)
            {
                if (e .Button == MouseButtons .Left && mover .CaughtNode < 4)
                {
                    mover .MouseTraced = false;
                    crescent .StartResizing (e .Location, mover .CaughtNode);
                    mover .MouseTraced = true;
                }
                … …

At this moment the mouse cursor is moved to the central point of the pressed node, which means placing the mouse exactly on the border. Then the two boundary points (ptInner and ptOuter) for the mouse movement are calculated, and until the moment of release the mouse cursor can move only along the straight line between these two points. Until the release of the mouse, its position is used as the location of the associated border point, so it is a classical use of the adhered mouse.
    public void StartResizing (Point ptMouse, int iNode)
    {
        ptHorns_base = Auxi_Geometry .PointToPoint (ptCenterInner, angle, rInnerBorder - wHang);
        h = Auxi_Geometry .Distance (ptB, ptHorns_base);
        switch (iNode)
        {
            case 0:
            default:
                ptInner = Auxi_Geometry .PointToPoint (ptCenterInner, angle,
                              Auxi_Geometry .Distance (ptCenterInner, ptHorns_base) + minHangWidth);
                ptOuter = Auxi_Geometry .PointToPoint (ptCenterOuter, angle, rOuterBorder - minWidth);
                break;
            case 1:
                Cursor .Position = form .PointToScreen (Point .Round (ptD));
                ptInner = Auxi_Geometry .PointToPoint (ptCenterInner, angle, rInnerBorder + minWidth);
                ptOuter = Auxi_Geometry .PointToPoint (ptHorns_base, angle, h);
                break;
            case 2:
                Cursor .Position = form .PointToScreen (Point .Round (ptB));
                ptInner = Auxi_Geometry .PointToPoint (ptHorns_base, angle + Math .PI / 2, wHang + width);
                ptOuter = Auxi_Geometry .PointToPoint (ptHorns_base, angle + Math .PI / 2, 4000);
                break;
            case 3:
                Cursor .Position = form .PointToScreen (Point .Round (ptC));
                ptInner = Auxi_Geometry .PointToPoint (ptHorns_base, angle - Math .PI / 2, wHang + width);
                ptOuter = Auxi_Geometry .PointToPoint (ptHorns_base, angle - Math .PI / 2, 4000);
                break;
        }
    }

Circles or their parts are often used in programs, and these elements can be used in different ways, so the next two examples deal with circles or their parts. I am sure that the sectors from figure 5 and their covers will remind you of the wide arcs from one of the previous examples. Yes, there are similarities in view and cover design, but the allowed range of the sweep angle of these sectors is different, and this difference causes some interesting changes in the code.
The sweep angle of these sectors can be changed between zero and 180 degrees, and this is not my whim: indicators with an arrow moving inside the [0, 180] degree range are widely used in real devices, and a lot of applications use similar screen objects to show the values of one or another parameter. In the real devices there are always some marks and numbers around the semicircle; these additional marks are very helpful in reading precise information from the moving arrow. It is not a problem at all to add such additional information around these sectors, but then everything will turn into a complex object. I demonstrate a lot of complex objects in the book, but I don't want to turn these simple elements into complex ones; let us keep them really simple and deal only with the movements of these sectors.

public class CircleSector : GraphicalObject
{
    PointF m_center;
    float m_radius;
    double angleStart;
    double angleSweep;
    SolidBrush m_brush;
    bool bFixedStartingSide = false;
    Pen penFixedBorder;

The CircleSector class contains four standard fields to describe an object of such shape: central point (m_center), radius (m_radius), angle of one side (angleStart), and the sweep angle from this side to the other (angleSweep). There is also a standard field for visualizing an object (m_brush), but in addition there are two fields which were not used in the previous similar classes.

In the real indicators the position of some value is fixed (usually it is the zero value) and the moving arrow goes away from this fixed side and then returns back. In my simulation of the real devices there is a possibility to fix one side – the side of the angleStart – by setting the value of the bFixedStartingSide field. Another additional field – penFixedBorder – is used for visualizing this fixed side.
I deliberately made this pen wide (now it is three pixels wide, but you can change the width) because the use of this wide pen helps to solve one problem which is specific for this example. The sweep angle of any CircleSector object can be diminished to zero, in which case the sector simply disappears from view. It is still somewhere there on the screen, but it is invisible. Not a good situation at all, but if you declare the possibility of a zero sweep angle, then you have to think about such a special situation. In the current example the wide line on the starting side is shown whenever the bFixedStartingSide value is set to true, regardless of the sweep angle. Thus, the sector can disappear, but the wide line still shows its location. If you don't want to fix the starting side, you can switch ON the covers and the cover will show you the place of the otherwise invisible sector. Then you can press the node, move one side away from the other, and in this way make the sector visible again. In a real application, where the covers are never shown, you will have to think about something else to show such an indicator with a zero sweep angle.

// order of nodes: [0] strip on angleStart + angleSweep
//                 [1] strip on angleStart (!)
//                 [2, 3] Transparent rectangles along the sides
//                 [4] smaller circle
//                 [5] bigger circle
//
public override void DefineCover ()
{
    double angleA, angleB;
    if (angleSweep >= 0)
    {
        angleA = angleStart + angleSweep;
        angleB = angleStart;
    }
    else
    {
        angleA = angleStart;
        angleB = angleStart + angleSweep;
    }
    double radPlus = m_radius + delta;
    PointF ptA = Auxi_Geometry .PointToPoint (m_center, angleA, radPlus);
    PointF ptB = Auxi_Geometry .PointToPoint (m_center, angleB, radPlus);
    PointF [] ptsA = new PointF [] {
        ptA,
        Auxi_Geometry .PointToPoint (ptA, angleA + Math .PI / 2, radPlus),
        Auxi_Geometry .PointToPoint (m_center, angleA + 3 * Math .PI / 4, radPlus * Math .Sqrt (2)),
        Auxi_Geometry .PointToPoint (m_center, angleA - Math .PI, radPlus) };
    PointF [] ptsB = new PointF [] {
        ptB,
        Auxi_Geometry .PointToPoint (m_center, angleB - Math .PI, radPlus),
        Auxi_Geometry .PointToPoint (m_center, angleB - 3 * Math .PI / 4, radPlus * Math .Sqrt (2)),
        Auxi_Geometry .PointToPoint (ptB, angleB - Math .PI / 2, radPlus) };
    CoverNode [] nodes = new CoverNode [6];
    nodes [0] = new CoverNode (0, m_center, Auxi_Geometry .PointToPoint (m_center, angleStart + angleSweep, m_radius));
    nodes [1] = new CoverNode (1, m_center, Auxi_Geometry .PointToPoint (m_center, angleStart, m_radius));
    nodes [2] = new CoverNode (2, ptsA, Behaviour .Transparent);
    nodes [3] = new CoverNode (3, ptsB, Behaviour .Transparent);
    nodes [4] = new CoverNode (4, m_center, m_radius - delta, Cursors .SizeAll);
    nodes [5] = new CoverNode (5, m_center, m_radius + delta);
    if (bFixedStartingSide)
    {
        nodes [1] .Behaviour = Behaviour .Frozen;
    }
    cover = new Cover (nodes);
    cover .SetClearance (false);
}

The cover for the CircleSector class is similar to the cover of the Arc_Wide class for the case of sweep angles not bigger than 180 degrees, but there is one change to which I want to attract your attention. This difference is in the placement of nodes zero and one.
When an object is described by the angle of one side (angleStart) and the sweep angle from this side to the other (angleSweep), then it is natural to place the first node of the cover – the node with zero number – on the starting side, and this is done, for example, in the Arc_Wide class. In the CircleSector class the reversed order of these two nodes is used, and this is not a mistake: it was done purposely. In the Arc_Wide class I could do it either way and both would work fine. In the CircleSector class the demonstrated order of the nodes on the sides is the only correct one. Why?

In the CircleSector class the minimal allowed sweep angle is zero, in which case the two nodes on the sides take the same position. Mover analyses the nodes of any cover exactly in the order in which they are included into the cover. When you fix one side of the CircleSector object, the side of the angleStart is fixed while the other side is movable. The node of this movable side must be the first one in the cover; otherwise you would move this side onto the position of the other, release it, and after that there would be no chance to open this sector again, because the unmovable node would cover the movable one. (If you want to design a similar object in which either side can be fixed, then you need to change the code in such a way that the first node is always on the movable side.)

When I want to fix one side of the sector, I change the behaviour of its node to Behaviour.Frozen. In the second part of this article I described all the values of the Behaviour enumeration. The Frozen value is not used too often, but in some cases, like this one, it is very useful.

if (bFixedStartingSide)
{
    nodes [1] .Behaviour = Behaviour .Frozen;
}

While organizing any movement dealing with angles, you have to deal with the problem caused by the origin of the angles and their calculation.
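The consequence of this node ordering can be shown with a few lines of Python (a simplified sketch of the mover's search with circular nodes only; the real mover is much richer, so take the names first_caught, FROZEN, and MOVABLE as illustrative):

```python
FROZEN, MOVABLE = "frozen", "movable"

def first_caught(nodes, pt):
    # A mover checks the nodes strictly in cover order and reports
    # the first node whose area contains the mouse point.
    for i, (center, radius, behaviour) in enumerate(nodes):
        if (pt[0] - center[0]) ** 2 + (pt[1] - center[1]) ** 2 <= radius ** 2:
            return i, behaviour
    return None

# With a zero sweep angle both side nodes collapse onto the same spot.
spot, r = (50.0, 50.0), 4.0
right_order = [(spot, r, MOVABLE), (spot, r, FROZEN)]   # movable node first
wrong_order = [(spot, r, FROZEN), (spot, r, MOVABLE)]   # frozen node first

caught_right = first_caught(right_order, spot)   # the movable node is reachable
caught_wrong = first_caught(wrong_order, spot)   # the frozen node shadows it
```

With the wrong order the press lands on the frozen node and the collapsed sector can never be reopened; with the right order the movable node is always found first.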
For the calculation of the linear distance between two points there is no ambiguity: the distance is either zero or positive. When an object is moved around a circle, the solution is ambiguous and often depends on the starting angle and the allowed movements. If the movement starts at 20 degrees and ends at 30 degrees then, with high probability, the object was moved by 10 degrees. But if the movement starts at 170 degrees and finishes at -170 degrees, then what was the sweep angle of such a movement? It is easier if you are interested only in the current angle of an object or of the mouse cursor, but what if you have to paint the sector between two positions? This problem is similar to the problem of jumping over the gap of an arc, which I mentioned in one of the previous examples. Every year or two I try again to solve such problems by calculations and comparison of angles alone, and each time I end up with very complicated and not always correct code. For the CircleSector class I do not use the comparison of angles but check the relative positions of two points and one line. This is much easier and works without mistakes.

Suppose that you press one side of the sector in order to change its sweep angle. The other side does not change its position throughout such a movement, and its end point is one point of our base line (ptSeg_0). The maximum allowed sweep angle is 180 degrees, and the end point of the movable side in that position gives the other point of the base line (ptSeg_1). Thus, the line through these two points is the boundary of the semicircle within which the movable side can go, and all the possible end points of the movable side lie on one side of this line or on the line itself. When the sweep angle is equal to 90 degrees, the angle to the end point of the moving side is angleToIn and the point itself is ptIn. The calculation of both values is done by the CircleSector.StartResizing() method, which is called at the initial moment of resizing.
I'll remind you once more that node number five allows changing the radius of the sector; just now we are interested in the remaining part of the method.

public void StartResizing (Point pt, int iNode)
{
    if (iNode == 5)
    {
        double angleBeam = Auxi_Geometry .Line_Angle (m_center, pt);
        PointF ptCursor = Auxi_Geometry .PointToPoint (m_center, angleBeam, m_radius);
        ptInnerLimit = Auxi_Geometry .PointToPoint (m_center, angleBeam, minRadius);
        ptOuterLimit = Auxi_Geometry .PointToPoint (m_center, angleBeam, 4000);
        Cursor .Position = form .PointToScreen (Point .Round (ptCursor));
    }
    else
    {
        angleStart = Auxi_Common .LimitedRadian (angleStart);
        if (iNode == 0)        // end line
        {
            ptSeg_0 = Auxi_Geometry .PointToPoint (m_center, angleStart, m_radius);
            ptSeg_1 = Auxi_Geometry .PointToPoint (m_center, angleStart + Math .PI, m_radius);
            angleToIn = (angleInit >= 0) ? (angleStart + Math .PI / 2) : (angleStart - Math .PI / 2);
        }
        else                   // start line
        {
            angleEnd = Auxi_Common .LimitedRadian (angleStart + angleSweep);
            ptSeg_0 = Auxi_Geometry .PointToPoint (m_center, angleEnd, m_radius);
            ptSeg_1 = Auxi_Geometry .PointToPoint (m_center, angleEnd + Math .PI, m_radius);
            angleToIn = (angleInit >= 0) ? (angleEnd - Math .PI / 2) : (angleEnd + Math .PI / 2);
        }
        ptIn = Auxi_Geometry .PointToPoint (m_center, angleToIn, m_radius);
    }
}

The check of the possible movement is done in the CircleSector.MoveNode() method. The proposed angle of the movable side is calculated from the mouse position (ptM), but the movable side can take the proposed angle only if two points – the previously calculated point (ptIn) and the current mouse location (ptM) – are not on opposite sides of the line going through the points ptSeg_0 and ptSeg_1. The position of the mouse exactly on this line is allowed, as sweep angles of zero and 180 degrees are both allowed.
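The side-of-line check itself is a sign test on cross products. Here is a small Python sketch of the idea behind such a check (my own formulation of the geometry, not the library code of Auxi_Geometry.OppositeSideOfLine):

```python
def opposite_sides(seg0, seg1, p, q):
    # The sign of the cross product tells on which side of the line
    # through seg0 and seg1 a point lies; opposite signs mean the two
    # points are on opposite sides.  A point lying exactly on the line
    # gives zero, so boundary positions are not treated as "opposite".
    def side(pt):
        return ((seg1[0] - seg0[0]) * (pt[1] - seg0[1])
                - (seg1[1] - seg0[1]) * (pt[0] - seg0[0]))
    return side(p) * side(q) < 0

# The horizontal diameter as the base line, with ptIn above it.
seg0, seg1, pt_in = (-10.0, 0.0), (10.0, 0.0), (0.0, 10.0)
allowed   = not opposite_sides(seg0, seg1, (3.0, 5.0), pt_in)    # same side
on_line   = not opposite_sides(seg0, seg1, (4.0, 0.0), pt_in)    # boundary is allowed
forbidden =     opposite_sides(seg0, seg1, (3.0, -5.0), pt_in)   # crossed the line
```

No angle normalization, no special cases around the 180-degree jump: the whole ambiguity of angles is replaced by one sign comparison.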
The Auxi_Geometry.OppositeSideOfLine() method analyses the positions of the points ptM and ptIn in relation to the line going through the points ptSeg_0 and ptSeg_1.

public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    bool bRet = false;
    PointF ptNearest;
    PointOfSegment nearestType;
    if (catcher == MouseButtons .Left)
    {
        double angleMouse = Auxi_Geometry .Line_Angle (m_center, ptM);
        if (i == 0)
        {
            if (!Auxi_Geometry .OppositeSideOfLine (ptSeg_0, ptSeg_1, ptM, ptIn))
            {
                angleSweep = Auxi_Common .LimitedRadian (angleMouse - angleStart);
                bRet = true;
            }
        }
        else if (i == 1)
        {
            if (!Auxi_Geometry .OppositeSideOfLine (ptSeg_0, ptSeg_1, ptM, ptIn))
            {
                angleSweep = Auxi_Common .LimitedRadian (angleEnd - angleMouse);
                angleStart = Auxi_Common .LimitedRadian (angleEnd - angleSweep);
                bRet = true;
            }
        }
        … …

Figure 6 demonstrates two objects of the Pie class; each pie is a collection of PieSlice elements.

public class Pie
{
    long m_id;
    List<PieSlice> m_slices = new List<PieSlice> ();

As seen from this short code, the Pie object itself is not derived from the classical GraphicalObject class, so it has no cover, it cannot be registered in the mover's queue, and thus cannot be involved in the process of moving / resizing in the standard way which was demonstrated with all other objects in the previous examples. At the same time these pies behave like any other movable and resizable objects: they can be moved around the screen and rotated, their sizes can be changed, any movement is started by pressing one or another part of a pie with the mouse, so the mover is involved in the whole process and supervises it in the familiar way. How is it organized?

When all the slices of one pie have their apices in the same central point and have the same radius, then such a pie looks like a multicolored circle (the right pie at figure 6).
It is not a problem to organize a simple enough cover for such a circle; it was already discussed in the previous part of this article. The design of a cover depends not only on the shape of an object but also on the required movements of the object and its parts. For any circle, even a multicolored one with sliding partitions, the relative simplicity of the cover is based on the fact that there is a single radius for all the parts and there are no gaps between the neighbouring sectors. On the other hand, the left pie at figure 6 shows that each slice of a Pie object may have its own radius and can be individually moved out and positioned separately from the others; these two requirements make the standard design of a cover very complicated. I am not saying that it is impossible to organize a traditional cover for a pie with such movement requirements. It is possible, but it would be too complicated and tiresome. Instead, I prefer to use in this case the idea of siblings: to have a set of similar looking elements that can in some cases move independently, while in other cases the movement of any of them causes the identical movement of all the linked elements. There are several examples with siblings in the book, but I decided to use in this article the example with the pies because the cover for each slice is similar to what was already discussed in this part of the article. Slices of a pie may be placed with gaps between them, but normally these gaps are not too big and the covers of the neighbouring slices overlap, making a mess of lines on the screen; to see the pure cover of a single PieSlice element you need to move this slice really far away from the central point of its pie (figure 7). The cover of a single PieSlice element is similar to the cover of a circle sector from the previous example (compare figures 5 and 7); only the sweep angle of a slice is fixed at the moment of initialization, so there is no need for two nodes on the sides.
There is also one additional circular node in the cover of the slices, because a slice can be moved individually or cause the synchronous movement of all the siblings. To organize the two different movements, the area of a slice is divided into two parts and these parts are covered by two different nodes; to better inform users about this difference in reaction, the two parts of a slice are painted in slightly different colors. Pressing the inner (darker) part of a slice with the left button starts the movement of the whole pie, regardless of whether at this moment the pressed slice is positioned next to its neighbours or was already moved out earlier. Pressing the outer (lighter) part starts the individual movement of this slice along its central line.

// order of nodes: [0, 1] transparent rectangles on the sides
//                 [2] inner circle
//                 [3] bigger circle - delta
//                 [4] bigger circle + delta
//
public override void DefineCover ()
{
    CoverNode [] nodes = new CoverNode [5];
    double radPlus = m_radius + delta;
    double angleA, angleB;
    if (angleSweep >= 0)
    {
        angleB = angleStart;
        angleA = angleStart + angleSweep;
    }
    else
    {
        angleA = angleStart;
        angleB = angleStart + angleSweep;
    }
    PointF [] ptsA, ptsB;
    if (Math .Abs (angleSweep) <= Math .PI)
    {
        PointF ptA = Auxi_Geometry .PointToPoint (ptApex, angleA, radPlus);
        PointF ptB = Auxi_Geometry .PointToPoint (ptApex, angleB, radPlus);
        ptsA = new PointF [] {
            ptA,
            Auxi_Geometry .PointToPoint (ptA, angleA + Math .PI / 2, radPlus),
            Auxi_Geometry .PointToPoint (ptApex, angleA + 3 * Math .PI / 4, radPlus * Math .Sqrt (2)),
            Auxi_Geometry .PointToPoint (ptApex, angleA - Math .PI, radPlus) };
        ptsB = new PointF [] {
            ptB,
            Auxi_Geometry .PointToPoint (ptApex, angleB - Math .PI, radPlus),
            Auxi_Geometry .PointToPoint (ptApex, angleB - 3 * Math .PI / 4, radPlus * Math .Sqrt (2)),
            Auxi_Geometry .PointToPoint (ptB, angleB - Math .PI / 2, radPlus) };
    }
    else
    {
        float r2 = 2 * m_radius;
        ptsA = new PointF [] { ptApex, … };
        ptsB =
            new PointF [] { m_center, … };
    }
    nodes [0] = new CoverNode (0, ptsA, Behaviour .Transparent);
    nodes [1] = new CoverNode (1, ptsB, Behaviour .Transparent);
    float rInner = m_radius * coefInner;
    nodes [2] = new CoverNode (2, ptApex, rInner, Cursors .SizeAll);
    nodes [3] = new CoverNode (3, ptApex, m_radius - delta, Cursors .Hand);
    nodes [4] = new CoverNode (4, ptApex, m_radius + delta, Cursors .Hand);
    cover = new Cover (nodes);
    cover .SetClearance (false);
}

There is one more change in the cover of a slice in comparison with the cover of a wide arc (see figure 3). In the wide arcs the number of nodes changes when the sweep angle crosses the boundary of 180 degrees; in the slices the number of nodes is always the same regardless of this angle. The code for a fixed number of nodes in the cover is easier, so, if you want, you can use the same technique for the wide arcs. When the sweep angle of a slice is greater than 180 degrees, it is enough to have one transparent node along both sides. I still use two nodes (to have the same number of nodes for any slice), but the second of these two nodes is fictional. This node is absolutely the same as the previous one, so this second node is never used (touched) by the mover. There can be another simple solution for this never used node: make it a small circular node far away off the screen.

What special features and methods are used to organize the PieSlice movements, and how does the synchronous movement of siblings work? A slice of a pie looks similar to a sector of a circle, so it has some identical fields like radius (m_radius), angle of one side (angleStart), sweep angle from this side to the other (angleSweep), and the brush for painting a slice (m_brush). Well, there is also another, lighter brush for painting the outer part of the slice (brushLight). Movement can be started at any inner point of both parts.
The ratio between the inner and outer circles is set to such a value (coefInner = 0.7f) that the areas of the two parts are nearly equal, which makes starting either movement equally easy.

public class PieSlice : GraphicalObject
{
    PointF m_center;
    PointF ptApex;
    float m_radius;
    double angleStart;
    double angleSweep;
    SolidBrush m_brush;
    SolidBrush brushLight;
    List<PieSlice> siblings = null;
    float coefInner = 0.7f;
    static int minRadius = 20;
    int delta = 3;
    int minDistanceCenterApex = 5;

Usually a pie is organized as a circle and the apex of each slice (ptApex) is positioned in the central point of the whole object (m_center). Any slice can be moved away from the central point; throughout such an individual movement of a slice its apex can move only along the central line of this slice.

In addition to the standard fields, there is a List of siblings – List<PieSlice> siblings – but this List is populated only for the pressed slice at the moment of pressing. For all other slices of the pie the List of siblings is empty, and this prevents an infinite loop of associated movements.
private void OnMouseDown (object sender, MouseEventArgs e)
{
    ptMouse_Down = e .Location;
    if (mover .Catch (e .Location, e .Button))
    {
        GraphicalObject grobj = mover .CaughtSource;
        if (grobj is PieSlice)
        {
            slicePressed = grobj as PieSlice;
            long idSlice = grobj .ID;
            long idPie = slicePressed .ParentID;
            for (int i = 0; i < pies .Count; i++)
            {
                if (idPie == pies [i] .ID)
                {
                    iPiePressed = i;
                    piePressed = pies [i];
                    break;
                }
            }
            List<PieSlice> siblings = new List<PieSlice> ();
            for (int i = 0; i < piePressed .Slices .Count; i++)
            {
                if (idSlice != piePressed .Slices [i] .ID)
                {
                    siblings .Add (piePressed .Slices [i]);
                    piePressed .Slices [i] .Siblings = null;
                }
                else
                {
                    iSlicePressed = i;
                }
            }
            slicePressed .Siblings = siblings;
            if (e .Button == MouseButtons .Left)
            {
                foreach (PieSlice slice in piePressed .Slices)
                {
                    slice .SaveInitialRadius ();
                }
                if (mover .CaughtNode == 3)
                {
                    slicePressed .StartSliceMovement (e .Location);
                }
                else if (mover .CaughtNode == 4)
                {
                    slicePressed .StartResizing (e .Location);
                }
            }
            else if (e .Button == MouseButtons .Right)
            {
                foreach (PieSlice slice in piePressed .Slices)
                {
                    slice .StartRotation (e .Location);
                }
            }
        }
    }
    ContextMenuStrip = null;
}

When a slice is moved individually, its apex moves along the radial line, and there is a strict limit on one side of this movement: the apex cannot move beyond the central point. There is no other limit for the apex movement, but an outer limit is needed for the calculations, so it is defined as being far away and definitely outside the screen. When a slice is pressed with the left button somewhere in its outer part, this is the starting moment of an individual movement. The PieSlice.StartSliceMovement() method calculates the range for moving the pressed mouse; this range – [ptInnerLimit, ptOuterLimit] – is defined by the allowed movement of the apex point and the initial position of the pressed mouse in relation to the apex.
public void StartSliceMovement (Point ptMouse)
{
    angleApex = Auxi_Common .LimitedRadian (angleStart + angleSweep / 2);
    angleApexMouse = Auxi_Geometry .Line_Angle (ptApex, ptMouse);
    distApexMouse = Auxi_Geometry .Distance (ptApex, ptMouse);
    ptInnerLimit = Auxi_Geometry .PointToPoint (m_center, angleApexMouse, distApexMouse);
    ptOuterLimit = Auxi_Geometry .PointToPoint (ptInnerLimit, angleApex, 4000);
}

When the inner part of the slice (node number two) is pressed and moved, the PieSlice.MoveNode() method simply calls the PieSlice.Move() method.

public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    bool bRet = false;
    if (catcher == MouseButtons .Left)
    {
        PointF ptBase, ptNearest;
        PointOfSegment nearestType;
        if (i == 2)
        {   // move all
            Move (dx, dy);
        }
        … …

// ------------------------------------------------- Move
public override void Move (int dx, int dy)
{
    m_center += new Size (dx, dy);
    ptApex += new Size (dx, dy);
    if (siblings != null)
    {
        foreach (PieSlice slice in siblings)
        {
            slice .Center = m_center;
        }
    }
}

For the pressed slice the List of siblings was already populated, so for each member of this List – which means for all other slices of the pie – the PieSlice.Center property is called. This property defines the new m_center value and synchronously changes the apex position. In this way the movement of the pressed slice causes the synchronous movement of all other slices and there is no change in the view of the whole pie; only the m_center field and the apex position are renewed for all the slices.

When the outer part of the slice (node number three) is pressed and moved, only the apex point of the pressed slice is moved inside the allowed range [ptInnerLimit, ptOuterLimit], while the other slices are not affected.
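The sibling mechanism just described can be condensed into a few lines of Python (a sketch with a hypothetical Slice class mimicking the roles of PieSlice.Move() and the Center property; only the pressed slice holds a sibling list, so the propagation is one level deep and cannot loop):

```python
class Slice:
    # A stand-in for PieSlice: centre of the pie, apex of the slice,
    # and a sibling list that stays None until the slice is pressed.
    def __init__(self, center):
        self.center = list(center)
        self.apex = list(center)      # the apex normally sits at the pie centre
        self.siblings = None          # populated only for the pressed slice

    def set_center(self, cx, cy):
        # The counterpart of the Center property: the apex is shifted
        # by the same amount as the centre.
        dx, dy = cx - self.center[0], cy - self.center[1]
        self.center = [cx, cy]
        self.apex = [self.apex[0] + dx, self.apex[1] + dy]

    def move(self, dx, dy):
        self.center = [self.center[0] + dx, self.center[1] + dy]
        self.apex = [self.apex[0] + dx, self.apex[1] + dy]
        if self.siblings:             # None for every non-pressed slice
            for s in self.siblings:
                s.set_center(*self.center)

slices = [Slice((100.0, 100.0)) for _ in range(3)]
pressed = slices[0]
pressed.siblings = [s for s in slices if s is not pressed]   # done on MouseDown
pressed.move(7.0, -3.0)
```

Because only the pressed slice propagates the movement, each sibling is updated exactly once and the whole pie shifts as a single rigid picture.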
public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    … …
    else if (i == 3)        // move one slice
    {
        Auxi_Geometry .Distance_PointSegment (ptM, ptInnerLimit, ptOuterLimit,
                                              out ptBase, out nearestType, out ptNearest);
        ptApex = Auxi_Geometry .PointToPoint (ptNearest, angleApexMouse + Math .PI, distApexMouse);
        Cursor .Position = form .PointToScreen (Point .Round (ptNearest));
        bRet = true;
    }
    … …

When the border of the slice (node number four) is pressed and moved, the current mouse position is used for the border placement and the slice radius is calculated from the cursor location. If the slice was previously moved away from the central point of the pie, nothing else is done; but if the apex of this slice is positioned in the central point of the pie, then the radii of all the slices are changed with the same coefficient.

public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    … …
    else if (i == 4)        // change radius
    {
        Auxi_Geometry .Distance_PointSegment (ptM, ptInnerLimit, ptOuterLimit,
                                              out ptBase, out nearestType, out ptNearest);
        m_radius = Convert .ToSingle (Auxi_Geometry .Distance (ptApex, ptNearest));
        Cursor .Position = form .PointToScreen (Point .Round (ptNearest));
        bRet = true;
        if (m_center == ptApex)
        {
            double coef = (double) radiusStart / m_radius;
            foreach (PieSlice sector in siblings)
            {
                sector .Zoom (coef);
            }
        }
    }
    … …

There is one more interesting object in the Form_Pie.cs: not a graphical primitive, but a small group of the ElasticGroup class. Groups are not discussed in this article, but they are described in detail and with many examples in the book. In the current application such groups are used in this example and the next one. Like all the objects in user-driven applications, these groups are movable and their inner elements are individually movable / resizable. A change of the inner elements causes, if needed, the automatic change of the group sizes.
You can change all the visibility parameters of the group by calling its tuning dialog; for this, use a right click anywhere inside the group.

The last example of this article differs from all the previous ones. First, there are main objects – big areas with holes – and different auxiliary objects (figure 8). In the current version the auxiliary elements are only circles and regular polygons, but it is not a problem to add more variants. These auxiliary objects definitely belong to the graphical primitives, and this article is a good place for them. Among these auxiliary objects are simple circles, which were shown in the second part of the article, and different regular polygons, which were supposed to be shown in the first part but did not appear there. Well, I am going to show them here. But polygons, though they fit perfectly with the idea of this article, are only the auxiliary elements in this example.

The main objects of this example consist mostly of holes. These objects are movable, but they have a lot of holes, and the holes are their most interesting parts. Can the holes be the main thing of an object? Well, in childhood we liked to buy fresh bagels, or something that looked like a bagel but with a much bigger hole than in a standard one. It was like bread in the form of a big ring, but much tastier than any bread. So what made the thing so tasty? Was it the hole?

In this example we have rectangular areas with holes and the auxiliary elements – the plugs. The idea is to select the needed plug, move it to the center of a hole, rotate the plug, and resize it in such a way that the plug fits well enough into the hole. If that happens, both the plug and the covered hole are eliminated. The area disappears when all its holes are closed.
public class Plug : GraphicalObject
{
    Shape m_shape;
    PointF m_center;
    float m_radius;
    int nVertices;
    double m_angle;
    SolidBrush m_brush;

Any plug is described by its shape (m_shape), central point (m_center), radius (m_radius; for regular polygons it is the distance from the central point to the vertices), number of vertices (only for polygons), angle of the first vertex (only for polygons), and the brush used for painting (m_brush).

The cover for any Plug object is simple and consists of only two nodes (figure 9). It is the theoretical minimum number of nodes, as any plug must be movable and resizable. Such a simple cover for a circle was already shown in the second part of this article. For regular polygons the idea of the simplest cover is the same: there are two nodes copying the shape of the border; the first node is slightly smaller than the object itself; the second node is slightly bigger.

public override void DefineCover ()
{
    CoverNode [] nodes = new CoverNode [2];
    if (m_shape == Shape .Circle)
    {
        nodes [0] = new CoverNode (0, m_center, m_radius - minDelta, Cursors .SizeAll);
        nodes [1] = new CoverNode (1, m_center, m_radius + minDelta);
    }
    else
    {
        double delta = minDelta / Math .Cos (Math .PI / nVertices);
        PointF [] ptsInside = Auxi_Geometry .RegularPolygon (m_center, m_radius - delta, nVertices, m_angle);
        PointF [] ptsOutside = Auxi_Geometry .RegularPolygon (m_center, m_radius + delta, nVertices, m_angle);
        nodes [0] = new CoverNode (0, ptsInside);
        nodes [1] = new CoverNode (1, ptsOutside, Cursors .Hand);
    }
    cover = new Cover (nodes);
}

The whole inner node is used for moving the pressed plug, so the plug can be moved and rotated by any inner point. The second node is used for resizing, but only its narrow strip along the border of the object is open for the mover, so resizing can be started at any border point.
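A remark on the line delta = minDelta / Math.Cos(Math.PI / nVertices): the polygonal nodes are grown and shrunk along the circumradius, but the strip the user aims at is measured across an edge, and for a regular n-gon the inradius equals the circumradius multiplied by cos(π/n). A quick Python check of this correction (my own verification of the geometry, not code from the article):

```python
import math

def strip_half_width(radius, min_delta, n):
    # Grow and shrink the circumradius by min_delta / cos(pi/n) and
    # measure the resulting strip across an edge: the inradius of a
    # regular n-gon is circumradius * cos(pi/n).
    corrected = min_delta / math.cos(math.pi / n)
    inner = (radius - corrected) * math.cos(math.pi / n)   # inradius of the inner node
    outer = (radius + corrected) * math.cos(math.pi / n)   # inradius of the outer node
    return (outer - inner) / 2.0

# Regardless of the number of vertices, the sensitive strip keeps the
# width minDelta on each side of the border.
widths = [strip_half_width(50.0, 3.0, n) for n in (3, 4, 6, 12)]
```

Without the cos(π/n) correction a triangle would get a much thinner sensitive strip at the middle of its edges than a 12-gon.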
When any point inside this narrow strip of the second node is pressed, the mouse cursor is moved to the nearest point of the border, and the beam (angleBeam) for the allowed movement of the pressed mouse and two limiting points on this beam – ptInnerLimit and ptOuterLimit – are calculated.

public void StartResizing (Point ptMouse)
{
    PointF ptCursor;
    double angleBeam;
    if (m_shape == Shape .Circle)
    {
        angleBeam = Auxi_Geometry .Line_Angle (m_center, ptMouse);
        ptCursor = Auxi_Geometry .PointToPoint (m_center, angleBeam, m_radius);
        ptInnerLimit = Auxi_Geometry .PointToPoint (m_center, angleBeam, minRadius);
    }
    else
    {
        PointF [] pts = Vertices;
        Auxi_Geometry .Distance_PointPolyline (ptMouse, pts, true, out ptCursor);
        scaling = m_radius / Auxi_Geometry .Distance (m_center, ptCursor);    // >= 1
        angleBeam = Auxi_Geometry .Line_Angle (m_center, ptCursor);
        ptInnerLimit = Auxi_Geometry .PointToPoint (m_center, angleBeam, minRadius / scaling);
    }
    ptOuterLimit = Auxi_Geometry .PointToPoint (m_center, angleBeam, 4000);
    Cursor .Position = form .PointToScreen (Point .Round (ptCursor));
}

Until the moment of release, the mouse can be moved only along the line between the two calculated points, and the current position of the mouse is used as the border placement.
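The scaling factor deserves a comment: for a polygon the pressed point lies on an edge, not at circumradius distance from the centre, so the ratio fixed at the moment of pressing converts the cursor distance back into a circumradius. A Python sketch of this arithmetic (an assumed minimal version of the logic, with a hypothetical resize_polygon_radius function; the real code also clamps the cursor to the beam first):

```python
import math

def resize_polygon_radius(center, cursor, scaling):
    # scaling was fixed on MouseDown as circumradius / distance from the
    # centre to the pressed border point (>= 1); multiplying the current
    # cursor distance by it turns a border distance back into a circumradius.
    dist = math.hypot(cursor[0] - center[0], cursor[1] - center[1])
    return dist * scaling

# A square (n = 4), pressed at the midpoint of an edge.
center = (0.0, 0.0)
radius = 10.0
edge_mid = radius * math.cos(math.pi / 4)    # inradius of the square
scaling = radius / edge_mid                  # fixed at the moment of pressing
# Dragging the cursor twice as far from the centre doubles the circumradius.
new_radius = resize_polygon_radius(center, (2.0 * edge_mid, 0.0), scaling)
```

Thanks to this factor the cursor stays glued to the moving border during resizing, no matter whether a vertex or the middle of an edge was pressed.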
public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    bool bRet = false;
    if (catcher == MouseButtons .Left)
    {
        if (i == 0)
        {
            Move (dx, dy);
        }
        else
        {
            PointF ptBase, ptNearest;
            PointOfSegment typeOfNearest;
            Auxi_Geometry .Distance_PointSegment (ptM, ptInnerLimit, ptOuterLimit,
                                                  out ptBase, out typeOfNearest, out ptNearest);
            double dist = Auxi_Geometry .Distance (m_center, ptNearest);
            if (m_shape == Shape .Circle)
            {
                m_radius = Convert .ToSingle (dist);
            }
            else
            {
                m_radius = Convert .ToSingle (dist * scaling);
            }
            bRet = true;
            Cursor .Position = form .PointToScreen (Point .Round (ptNearest));
        }
    }
    … …

An area with holes belongs to the AreaWithHoles class, and this is the most interesting object in this example.

public class AreaWithHoles : GraphicalObject
{
    RectangleF rc;
    int nRow, nCol;
    List<Hole> holes = new List<Hole> ();
    SolidBrush brush;

The number of holes in an area can vary, but the central points of the holes are positioned along rows and columns, and there are some simple limitations to prevent the overlapping of holes. Holes and plugs have the same set of shapes (the Shape enumeration), and the Hole class has the same fields for the definition of its objects as the Plug class.

public class Hole
{
    Shape m_shape;
    PointF m_center;
    float m_radius;
    int nVertices;
    double m_angle;

Holes are not movable. They are defined at the moment when a new area is organized, and their parameters are used in the design of the area's cover. The sizes of an area can differ, but they are defined at the moment of initialization and are not changed later. Thus, an area is a non-resizable but movable rectangle, and it would be enough to have for it a cover consisting of a single rectangular node, if not for the problem of holes. The holes must provide the view through them, and this is achieved by simple painting. The holes must also provide the movement of the underlying objects, and this is the classical case for using the transparent nodes.
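The run-time effect of transparent nodes can be sketched as a two-level search (a simplified Python model of the catching logic, not the real mover implementation; the names catch, circle, and rect are mine):

```python
TRANSPARENT, OPAQUE = "transparent", "opaque"

def catch(objects, pt):
    # Objects are checked from the top of the queue down; inside each
    # cover the nodes are checked in order.  A transparent node under
    # the mouse stops the search in this cover and lets the mouse
    # through to the next object underneath.
    for name, nodes in objects:
        for contains, behaviour in nodes:
            if contains(pt):
                if behaviour == TRANSPARENT:
                    break              # fall through to the object below
                return name
    return None

def circle(cx, cy, r):
    return lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r ** 2

def rect(x0, y0, x1, y1):
    return lambda p: x0 <= p[0] <= x1 and y0 <= p[1] <= y1

area = ("area", [(circle(30.0, 30.0, 10.0), TRANSPARENT),   # a hole
                 (rect(0.0, 0.0, 100.0, 100.0), OPAQUE)])   # the body
plug = ("plug", [(circle(30.0, 30.0, 8.0), OPAQUE)])
queue = [area, plug]                                        # the area is on top

caught_in_hole = catch(queue, (30.0, 30.0))   # through the hole to the plug
caught_on_body = catch(queue, (70.0, 70.0))   # the body of the area
```

This is also why the transparent hole nodes must precede the rectangular node in the cover: if the rectangle came first, it would catch the mouse before any hole could let it through.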
Each hole has a simple shape, so any hole can be covered by a single node, either circular or polygonal. Thus, we have a set of transparent nodes which precede the only non-transparent node in the cover. If there are N holes in the area, then the cover for such an area consists of N+1 nodes, and only the last of them is not transparent. It is an unusual but simple enough cover, and very interesting from the design point of view.

public override void DefineCover ()
{
    CoverNode [] nodes = new CoverNode [holes.Count + 1];
    for (int i = 0; i < holes.Count; i++)
    {
        if (holes [i].VerticesNumber == 0)
        {
            nodes [i] = new CoverNode (i, holes [i].Center, holes [i].Radius, Behaviour.Transparent);
        }
        else
        {
            nodes [i] = new CoverNode (i, holes [i].Vertices, Behaviour.Transparent);
        }
    }
    nodes [holes.Count] = new CoverNode (holes.Count, rc);
    cover = new Cover (nodes);
}

From the design point of view it is an unusual cover, and that was the only reason to demonstrate it here: it is a very interesting example of using transparent nodes. From the point of view of moving, it is difficult to think of anything simpler. There is only one node that can start any movement, and this node can be used only for forward movement, so when this node is pressed, the AreaWithHoles.MoveNode() method must call the AreaWithHoles.Move() method.

public override bool MoveNode (int i, int dx, int dy, Point ptM, MouseButtons catcher)
{
    bool bRet = false;
    if (catcher == MouseButtons.Left)
    {
        Move (dx, dy);
        bRet = true;
    }
    return (bRet);
}

The AreaWithHoles.Move() method has to change the location of the area according to the mouse movement; the only additional thing is the synchronous change of the central points of all the holes of the area.
public override void Move (int dx, int dy)
{
    rc.X += dx;
    rc.Y += dy;
    SizeF size = new SizeF (dx, dy);
    for (int i = 0; i < holes.Count; i++)
    {
        holes [i].Center += size;
    }
}

This article demonstrates only the moving and resizing of the simplest screen elements that we use in our programs. I began with the primitive but often used cases of covers composed of several nodes and demonstrated the moving of lines and polygons. For curved borders, the N-node covers are used. Though such covers may consist of a significant number of nodes, all these nodes behave in the same way, so the MoveNode() methods for classes with such covers are also simple. The next very powerful technique is the use of transparent nodes. They can make unbelievably simple those covers that could be developed in a standard way but would require a lot of very tiresome work, and they can also give simple solutions in cases where standard cover design is problematic.

The objects demonstrated in this article are simple, and I did this purposely. It's like the use of differentiation and integration: you learn to use them on a set of simple examples, but there is no fixed list of all their applications. Everything is based on your knowledge of several main rules, experience, and the ability to think and solve new problems. Exactly the same happens with the movability of real screen objects. Some of them just copy the examples from this article, and you can use them immediately. You can find many other examples from different areas in the book and its accompanying project with the source code. There are a lot of examples, but they are not going to cover all your demands. Use the demonstrated technique and your brains. I tried not to include more complicated objects in the examples of this article, but the ElasticGroup class already found its way into it.
Maybe I'll write about this and the ArbitraryGroup class in a future article, but both classes are described and demonstrated in detail in the mentioned book. We often need to use complex objects which allow individual, synchronous, and related movements of their parts; the design and operation of movable complex objects is also described in the book. Using objects' movability in your programs is like using math: everything depends on you and your ability.
http://www.codeproject.com/Articles/574431/What-can-be-simpler-than-graphical-primitives-Part?fid=1830089&tid=4538223
- Introduction - all(iterable) - any(iterable) - cmp(x,y) - dict([arg]) - enumerate(iterable [,start=0]) - isinstance(object, classinfo) - pow(x, y [,z]) - zip([iterable, ]) - Conclusion Introduction In this article, we are going to see a couple use-case examples of some of the Python built-in functions. These functions can prove themselves extremely useful, and I think every Python coder should learn how to use them: they’re fast and well thought. For each function, I will provide two snippets: one without any built-in function, and the equivalent “pythonic” snippet. Do not get offended by the fact that I might copy and paste function description from the built-in functions documentation, these guys are better than me at writing documentation! all(iterable) Return True if all elements of the iterable are true (or if the iterable is empty). _all = True for item in iterable: if not item: _all = False break if _all: # do stuff This snippet is equivalent to : if all(iterable): # do stuff any(iterable) Return True if any element of the iterable is true. If the iterable is empty, return False. _any = False for item in iterable: if item: _any = True break if _any: # do stuff This snippet is equivalent to : if any(iterable): # do stuff cmp(x,y) Compare the two objects x and y and return an integer according to the outcome. The return value is negative if x < y, zero if x == y and strictly positive if x > y. So if you implemented a function looking like that: def compare(x,y): if x < y: return -1 elif x == y: return 0 else: return 1 Well folks… That’s just exactly what cmp(x,y) does. dict([arg]) Create a new data dictionary, optionally with items taken from arg. This arg part is usually unknown, but it’s extremely convenient! 
For example, if we had a list of 2-tuples that we'd like to convert to a dict, we could do that:

l = [('Knights', 'Ni'), ('Monty', 'Python'), ('SPAM', 'SPAAAM')]
d = dict()
for tuple in l:
    d[tuple[0]] = tuple[1]
# {'Knights': 'Ni', 'Monty': 'Python', 'SPAM': 'SPAAAM'}

Or we could just do that:

l = [('Knights', 'Ni'), ('Monty', 'Python'), ('SPAM', 'SPAAAM')]
d = dict(l)
# {'Knights': 'Ni', 'Monty': 'Python', 'SPAM': 'SPAAAM'}

Isn't it neater?

enumerate(iterable [,start=0])

Ah, I really like this one. If you come from C, you probably write this kind of thing all the time:

for i in range(len(list)):
    # do stuff with list[i], for example, print it
    print i, list[i]

Well. Just don't. It's kind of unreadable. What you can do is use enumerate():

for i, item in enumerate(list):
    # do stuff with item, for example print it
    print i, item

You can also specify the starting point, using the start argument.

isinstance(object, classinfo)

Return true if the object argument is an instance of the classinfo argument, or of a (direct or indirect) subclass thereof.

When you want to test the type of an object, the first thing that comes to mind is using the type() function.

if type(obj) == dict:
    # do stuff
elif type(obj) == list:
    # do other stuff
...

Instead of doing that, you should use the nice isinstance function, because type() was not primarily designed to test the type of an object:

if isinstance(obj, dict):
    # do stuff
elif isinstance(obj, list):
    # do other stuff
...

pow(x, y [,z])

Return x to the power y; if z is present, return x to the power y, modulo z.

If you want to compute x to the power of y, modulo z, you could use

mod = (x ** y) % z

Well yes. But if x=1234567, y=4567676 and z=56, my laptop computes this calculation in 64s. Instead of using the ** and % operators, use the pow(x, y, z) function: pow(1234567, 4567676, 56). On my laptop, it only takes 0.034s. It's basically ~2000 times faster. Just food for thought.
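The speed difference comes from modular exponentiation: the three-argument pow reduces modulo z at every step, so intermediate values stay small. Here is a rough sketch of the square-and-multiply idea behind it (illustrative only, not CPython's actual implementation):

```python
def powmod(x, y, z):
    # Square-and-multiply: reduce modulo z at every step so that
    # intermediate values never grow beyond z * z, instead of
    # materializing the astronomically large x ** y first.
    result = 1
    x %= z
    while y > 0:
        if y & 1:                      # current bit of the exponent is set
            result = (result * x) % z
        x = (x * x) % z                # square the base for the next bit
        y >>= 1
    return result

print(powmod(1234567, 4567676, 56) == pow(1234567, 4567676, 56))  # True
```

Even this pure-Python sketch finishes instantly on the numbers above, because no intermediate value ever exceeds z squared.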
zip([iterable, ])

This function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables.

l1 = ('You gotta', 'the')
l2 = ('love', 'built-in')
out = []
if len(l1) == len(l2):
    for i in range(len(l1)):
        out.append((l1[i], l2[i]))
# out = [('You gotta', 'love'), ('the', 'built-in')]

The equivalent "pythonic" code would simply be

l1 = ['You gotta', 'the']
l2 = ['love', 'built-in']
out = zip(l1, l2)
# [('You gotta', 'love'), ('the', 'built-in')]

Note that you can do the reverse operation using the * operator:

print zip(*out)
# [('You gotta', 'the'), ('love', 'built-in')]

Conclusion

Python built-in functions are very convenient: they are fast and optimized (they're written in C), so they might be more efficient than anything you write in Python. Use them extensively :)

I really think that every Python coder should have read the built-in documentation. Oh, and there's also a nice set of tools in the itertools module. Once again, they're fast, optimized and awesome.
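As a parting example, these built-ins also compose well: zip and dict together replace the manual pairing loop shown in the dict section earlier (a small illustrative sketch):

```python
keys = ['Knights', 'Monty', 'SPAM']
values = ['Ni', 'Python', 'SPAAAM']

# zip pairs the two lists element-wise; dict consumes the pairs directly
d = dict(zip(keys, values))
print(d)  # {'Knights': 'Ni', 'Monty': 'Python', 'SPAM': 'SPAAAM'}
```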
http://isbullsh.it/2012/05/05-Python-built-in-functions/
How to TDD a List Implementation in Java Last modified: December 28, 2018 1. Overview In this tutorial, we’ll walk through a custom List implementation using the Test-Driven Development (TDD) process. This is not an intro to TDD, so we’re assuming you already have some basic idea of what it means and the sustained interest to get better at it. Simply put, TDD is a design tool, enabling us to drive our implementation with the help of tests. A quick disclaimer – we’re not focusing on creating efficient implementation here – just using it as an excuse to display TDD practices. 2. Getting Started First, let’s define the skeleton for our class: public class CustomList<E> implements List<E> { private Object[] internal = {}; // empty implementation methods } The CustomList class implements the List interface, hence it must contain implementations for all the methods declared in that interface. To get started, we can just provide empty bodies for those methods. If a method has a return type, we can return an arbitrary value of that type, such as null for Object or false for boolean. For the sake of brevity, we’ll omit optional methods, together with some obligatory methods that aren’t often used. 3. TDD Cycles Developing our implementation with TDD means that we need to create test cases first, thereby defining requirements for our implementation. Only then we’ll create or fix the implementation code to make those tests pass. In a very simplified manner, the three main steps in each cycle are: - Writing tests – define requirements in the form of tests - Implementing features – make the tests pass without focusing too much on the elegance of the code - Refactoring – improve the code to make it easier to read and maintain while still passing the tests We’ll go through these TDD cycles for some methods of the List interface, starting with the simplest ones. 4. The isEmpty Method The isEmpty method is probably the most straightforward method defined in the List interface. 
Here’s our starting implementation: @Override public boolean isEmpty() { return false; } This initial method definition is enough to compile. The body of this method will be “forced” to improve when more and more tests are added. 4.1. The First Cycle Let’s write the first test case which makes sure that the isEmpty method returns true when the list doesn’t contain any element: @Test public void givenEmptyList_whenIsEmpty_thenTrueIsReturned() { List<Object> list = new CustomList<>(); assertTrue(list.isEmpty()); } The given test fails since the isEmpty method always returns false. We can make it pass just by flipping the return value: @Override public boolean isEmpty() { return true; } 4.2. The Second Cycle To confirm that the isEmpty method returns false when the list isn’t empty, we need to add at least one element: @Test public void givenNonEmptyList_whenIsEmpty_thenFalseIsReturned() { List<Object> list = new CustomList<>(); list.add(null); assertFalse(list.isEmpty()); } An implementation of the add method is now required. Here’s the add method we start with: @Override public boolean add(E element) { return false; } This method implementation doesn’t work as no changes to the internal data structure of the list are made. Let’s update it to store the added element: @Override public boolean add(E element) { internal = new Object[] { element }; return false; } Our test still fails since the isEmpty method hasn’t been enhanced. Let’s do that: @Override public boolean isEmpty() { if (internal.length != 0) { return false; } else { return true; } } The non-empty test passes at this point. 4.3. Refactoring Both test cases we’ve seen so far pass, but the code of the isEmpty method could be more elegant. Let’s refactor it: @Override public boolean isEmpty() { return internal.length == 0; } We can see that tests pass, so the implementation of the isEmpty method is complete now. 5. 
The size Method This is our starting implementation of the size method enabling the CustomList class to compile: @Override public int size() { return 0; } 5.1. The First Cycle Using the existing add method, we can create the first test for the size method, verifying that the size of a list with a single element is 1: @Test public void givenListWithAnElement_whenSize_thenOneIsReturned() { List<Object> list = new CustomList<>(); list.add(null); assertEquals(1, list.size()); } The test fails as the size method is returning 0. Let’s make it pass with a new implementation: @Override public int size() { if (isEmpty()) { return 0; } else { return internal.length; } } 5.2. Refactoring We can refactor the size method to make it more elegant: @Override public int size() { return internal.length; } The implementation of this method is now complete. 6. The get Method Here’s the starting implementation of get: @Override public E get(int index) { return null; } 6.1. The First Cycle Let’s take a look at the first test for this method, which verifies the value of the single element in the list: @Test public void givenListWithAnElement_whenGet_thenThatElementIsReturned() { List<Object> list = new CustomList<>(); list.add("baeldung"); Object element = list.get(0); assertEquals("baeldung", element); } The test will pass with this implementation of the get method: @Override public E get(int index) { return (E) internal[0]; } 6.2. Improvement Usually, we’d add more tests before making additional improvements to the get method. Those tests would need other methods of the List interface to implement proper assertions. However, these other methods aren’t mature enough, yet, so we break the TDD cycle and create a complete implementation of the get method, which is, in fact, not very hard. It’s easy to imagine that get must extract an element from the internal array at the specified location using the index parameter: @Override public E get(int index) { return (E) internal[index]; } 7. 
The add Method This is the add method we created in section 4: @Override public boolean add(E element) { internal = new Object[] { element }; return false; } 7.1. The First Cycle The following is a simple test that verifies the return value of add: @Test public void givenEmptyList_whenElementIsAdded_thenGetReturnsThatElement() { List<Object> list = new CustomList<>(); boolean succeeded = list.add(null); assertTrue(succeeded); } We must modify the add method to return true for the test to pass: @Override public boolean add(E element) { internal = new Object[] { element }; return true; } Although the test passes, the add method doesn’t cover all cases yet. If we add a second element to the list, the existing element will be lost. 7.2. The Second Cycle Here’s another test adding the requirement that the list can contain more than one element: @Test public void givenListWithAnElement_whenAnotherIsAdded_thenGetReturnsBoth() { List<Object> list = new CustomList<>(); list.add("baeldung"); list.add(".com"); Object element1 = list.get(0); Object element2 = list.get(1); assertEquals("baeldung", element1); assertEquals(".com", element2); } The test will fail since the add method in its current form doesn’t allow more than one element to be added. Let’s change the implementation code: @Override public boolean add(E element) { Object[] temp = Arrays.copyOf(internal, internal.length + 1); temp[internal.length] = element; internal = temp; return true; } The implementation is elegant enough, hence we don’t need to refactor it. 8. Conclusion This tutorial went through a test-driven development process to create part of a custom List implementation. Using TDD, we can implement requirements step by step, while keeping the test coverage at a very high level. Also, the implementation is guaranteed to be testable, since it was created to make the tests pass. 
Note that the custom class created in this article is just used for demonstration purposes and should not be adopted in a real-world project. The complete source code for this tutorial, including the test and implementation methods left out for the sake of brevity, can be found over on GitHub.
https://www.baeldung.com/java-test-driven-list
Recently, we were planning a study to evaluate the effect of an intervention on outcomes for very sick patients who show up in the emergency department. My collaborator had concerns about a phenomenon that she had observed in other studies that might affect the results – patients measured earlier in the study tend to be sicker than those measured later in the study. This might not be a problem, but in the context of a stepped-wedge study design (see this for a discussion that touches this type of study design), this could definitely generate biased estimates: when the intervention occurs later in the study (as it does in a stepped-wedge design), the “exposed” and “unexposed” populations could differ, and in turn so could the outcomes. We might confuse an artificial effect as an intervention effect. What could explain this phenomenon? The title of this post provides a hint: cases earlier in a study are more likely to be prevalent ones (i.e. they have been sick for a while), whereas later in the study cases tend to be incident (i.e. they only recently become sick). Even though both prevalent and incident cases are sick, the former may be sicker on average than the latter, simply because their condition has had more time develop. We didn’t have any data to test out this hypothesis (if our grant proposal is funded, we will be able to do that), so I decided to see if I could simulate this phenomenon. In my continuing series exploring simulation using Rcpp, simstudy, and data.table, I am presenting some code that I used to do this. Generating a population of patients The first task is to generate a population of individuals, each of whom starts out healthy and potentially becomes sicker over time. Time starts in month 1 and ends at some fixed point – in the first example, I end at 400 months. Each individual has a starting health status and a start month. In the examples that follow, health status is 1 through 4, with 1 being healthy, 3 is quite sick, and 4 is death. 
And, you can think of the start month as the point where the individual ages into the study. (For example, if the study includes only people 65 and over, the start month is the month the individual turns 65.) The first part of the simulation generates a start month and starting health status for each individual, and then generates a health status for each individual until the end of time. Some individuals may die, while others may go all the way to the end of the simulation in a healthy state.

Rcpp function to generate health status for each period

While it is generally preferable to avoid loops in R, sometimes they cannot be avoided. I believe generating a health status that depends on the previous health status (a Markov process) is one of those situations. So, I have written an Rcpp function to do this – it is orders of magnitude faster than doing this in R:

#include <RcppArmadilloExtensions/sample.h>
// [[Rcpp::depends(RcppArmadillo)]]

using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector MCsim( unsigned int nMonths, NumericMatrix P,
                     int startStatus, unsigned int startMonth ) {

  IntegerVector sim( nMonths );
  IntegerVector healthStats( P.ncol() );
  NumericVector currentP;
  IntegerVector newstate;

  unsigned int q = P.ncol();
  healthStats = Rcpp::seq(1, q);

  sim[startMonth - 1] = startStatus;

  /* Loop through each month for each individual */

  for (unsigned int i = startMonth; i < nMonths; i++) {

    /* new state based on health status of last period and
       probability of transitioning to different state */

    newstate = RcppArmadillo::sample( healthStats, 1, TRUE, P.row(sim(i-1) - 1) );
    sim(i) = newstate(0);
  }

  return sim;
}
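For readers who want to prototype the idea without compiling C++, the same Markov loop can be sketched in plain Python (a hypothetical port of MCsim, not code from the original post, and far slower than the Rcpp version):

```python
import random

def mc_sim(n_months, P, start_status, start_month):
    """Markov-chain health history; 0 marks months before aging in.

    P[i] holds the transition probabilities out of status i + 1.
    """
    sim = [0] * n_months
    sim[start_month - 1] = start_status
    for i in range(start_month, n_months):
        # Draw this month's status from the row matching last month's status
        sim[i] = random.choices([1, 2, 3, 4], weights=P[sim[i - 1] - 1])[0]
    return sim

# Same transition matrix as in the simulation: upper-triangular, so
# health can only worsen, and status 4 (death) is absorbing.
P = [[0.985, 0.015, 0.000, 0.000],
     [0.000, 0.950, 0.050, 0.000],
     [0.000, 0.000, 0.850, 0.150],
     [0.000, 0.000, 0.000, 1.000]]

history = mc_sim(120, P, start_status=1, start_month=1)
```

Because the matrix is upper-triangular, every simulated history is non-decreasing, and once an individual reaches status 4 they stay there.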
The general outline of the process is (1) define transition probabilities, (2) define starting health status distribution, (3) generate starting health statuses and start months, and (4) generate health statuses for each follow-up month. # Transition matrix for moving through health statuses P <- matrix(c(0.985, 0.015, 0.000, 0.000, 0.000, 0.950, 0.050, 0.000, 0.000, 0.000, 0.850, 0.150, 0.000, 0.000, 0.000, 1.000), nrow = 4, byrow = TRUE) maxFU = 400 nPerMonth = 350 N = maxFU * nPerMonth ddef <- defData(varname = "sHealth", formula = "0.80; 0.15; 0.05", dist = "categorical") # generate starting health values (1, 2, or 3) for all individuals set.seed(123) did <- genData(n = N, dtDefs = ddef) # each month, 350 age in to the sample did[, sMonth := rep(1:maxFU, each = nPerMonth)] # show table for 10 randomly selected individuals did[id %in% sample(x = did$id, size = 10, replace = FALSE)] ## id sHealth sMonth ## 1: 15343 2 44 ## 2: 19422 2 56 ## 3: 41426 1 119 ## 4: 50050 1 143 ## 5: 63042 1 181 ## 6: 83584 1 239 ## 7: 93295 1 267 ## 8: 110034 1 315 ## 9: 112164 3 321 ## 10: 123223 1 353 # generate the health status history based on the transition matrix dhealth <- did[, .(sHealth, sMonth, health = MCsim(maxFU, P, sHealth, sMonth)), keyby = id] dhealth[, month := c(1:.N), by = id] dhealth ## id sHealth sMonth health month ## 1: 1 1 1 1 1 ## 2: 1 1 1 1 2 ## 3: 1 1 1 1 3 ## 4: 1 1 1 1 4 ## 5: 1 1 1 1 5 ## --- ## 55999996: 140000 1 400 0 396 ## 55999997: 140000 1 400 0 397 ## 55999998: 140000 1 400 0 398 ## 55999999: 140000 1 400 0 399 ## 56000000: 140000 1 400 1 400 Simulation needs burn-in period The simulation process itself is biased in its early phases as there are too many individuals in the sample who have just aged in compared to those who are “older”. (This is sort of the the reverse of the incidence – prevalence bias.) 
Since individuals tend to have better health status when they are “younger”, the average health status of the simulation in its early phases is biased downwards by the preponderance of young individuals in the population. This suggests that any evaluation of simulated data needs to account for a “burn-in” period that ensures there is a mix of “younger” and “older” individuals. To show this, I have calculated an average health score for each period of the simulation and plotted the results. You can see that the sample stabilizes after about 200 months in this simulation. # count number of individuals with a particular heath statust each month cmonth <- dhealth[month > 0, .N, keyby = .(month, health)] cmonth ## month health N ## 1: 1 0 139650 ## 2: 1 1 286 ## 3: 1 2 47 ## 4: 1 3 17 ## 5: 2 0 139300 ## --- ## 1994: 399 4 112203 ## 1995: 400 1 18610 ## 1996: 400 2 6515 ## 1997: 400 3 2309 ## 1998: 400 4 112566 # transform data from "long" form to "wide" form and calculate average mtotal <- dcast(data = cmonth, formula = month ~ health, fill = 0, value.var = "N") mtotal[, total := `1` + `2` + `3`] mtotal[, wavg := (`1` + 2*`2` + 3*`3`)/total] mtotal ## month 0 1 2 3 4 total wavg ## 1: 1 139650 286 47 17 0 350 1.231429 ## 2: 2 139300 558 106 32 4 696 1.244253 ## 3: 3 138950 829 168 45 8 1042 1.247601 ## 4: 4 138600 1104 215 66 15 1385 1.250542 ## 5: 5 138250 1362 278 87 23 1727 1.261726 ## --- ## 396: 396 1400 18616 6499 2351 111134 27466 1.407813 ## 397: 397 1050 18613 6537 2321 111479 27471 1.406938 ## 398: 398 700 18587 6561 2323 111829 27471 1.407957 ## 399: 399 350 18602 6541 2304 112203 27447 1.406201 ## 400: 400 0 18610 6515 2309 112566 27434 1.405810 ggplot(data = mtotal, aes(x=month, y=wavg)) + geom_line() + ylim(1.2, 1.5) + geom_hline(yintercept = 1.411, lty = 3) + geom_vline(xintercept = 200, lty = 3) + xlab("Month") + ylab("Average health status") + theme(panel.background = element_rect(fill = "grey90"), panel.grid = element_blank(), plot.title = 
element_text(size = 12, vjust = 0.5, hjust = 0)) +
  ggtitle("Average health status of simulated population")

Generating monthly study cohorts

Now we are ready to see if we can simulate the incidence – prevalence bias. The idea here is to find the first month during which an individual (1) is "active" (i.e. the period being considered is on or after the individual's start period), (2) has an emergency department visit, and (3) whose health status has reached a specified threshold. We can set a final parameter that looks back some number of months (say 6 or 12) to see if there have been any previous qualifying emergency room visits before the study start period (which in our case will be month 290 to mitigate the burn-in bias identified above). This "look-back" will be used to mitigate some of the bias by creating a washout period that makes the prevalent cases look more like incident cases.

This look-back parameter is calculated each month for each individual using an Rcpp function that loops through each period:

#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
IntegerVector cAddPrior(IntegerVector idx, IntegerVector event, int lookback) {

  int nRow = idx.length();
  IntegerVector sumPrior(nRow, NA_INTEGER);

  for (unsigned int i = lookback; i < nRow; i++) {
    IntegerVector seqx = Rcpp::seq(i - lookback, i - 1);
    IntegerVector x = event[seqx];
    sumPrior[i] = sum(x);
  }

  return(sumPrior);
}

Generating a single cohort

The following code (1) generates a population (as we did above), (2) generates emergency department visits that are dependent on the health status (the sicker an individual is, the more likely they are to go to the ED), (3) calculates the number of eligible ED visits during the look-back period, and (4) creates the monthly cohorts based on the selection criteria. At the end, we calculate the average health status for the cohort by month of cohort – this will be used to illustrate the bias.
maxFU = 325 nPerMonth = 100 N = maxFU * nPerMonth START = 289 # to allow for adequate burn-in HEALTH = 2 LOOKBACK = 6 # how far to lookback set.seed(123) did <- genData(n = N, dtDefs = ddef) did[, sMonth := rep(1:maxFU, each = nPerMonth)] healthStats <- did[, .(sHealth, sMonth, health = MCsim(maxFU, P, sHealth, sMonth)), keyby = id] healthStats[, month := c(1:.N), by = id] # eliminate period without status measurement (0) & death (4) healthStats <- healthStats[!(health %in% c(0,4))] # ensure burn-in by starting with observations far # into simulation healthStats <- healthStats[month > (START - LOOKBACK)] # set probability of emergency department visit healthStats[, pED := (health == 1) * 0.02 + (health == 2) * 0.10 + (health == 3) * 0.20] # generate emergency department visit healthStats[, ed := rbinom(.N, 1, pED)] healthStats[, edAdj := ed * as.integer(health >= HEALTH)] # if you want to restrict healthStats[, pSum := cAddPrior(month, edAdj, lookback = LOOKBACK), keyby=id] # look at one individual healthStats[id == 28069] ## id sHealth sMonth health month pED ed edAdj pSum ## 1: 28069 1 281 1 284 0.02 0 0 NA ## 2: 28069 1 281 1 285 0.02 0 0 NA ## 3: 28069 1 281 1 286 0.02 0 0 NA ## 4: 28069 1 281 1 287 0.02 0 0 NA ## 5: 28069 1 281 2 288 0.10 0 0 NA ## 6: 28069 1 281 2 289 0.10 0 0 NA ## 7: 28069 1 281 2 290 0.10 1 1 0 ## 8: 28069 1 281 2 291 0.10 0 0 1 ## 9: 28069 1 281 2 292 0.10 0 0 1 ## 10: 28069 1 281 2 293 0.10 0 0 1 ## 11: 28069 1 281 2 294 0.10 0 0 1 ## 12: 28069 1 281 3 295 0.20 1 1 1 # cohort includes individuals with 1 prior ed visit in # previous 6 months cohort <- healthStats[edAdj == 1 & pSum == 0] cohort <- cohort[, .(month = min(month)), keyby = id] cohort ## id month ## 1: 53 306 ## 2: 82 313 ## 3: 140 324 ## 4: 585 291 ## 5: 790 299 ## --- ## 3933: 31718 324 ## 3934: 31744 325 ## 3935: 31810 325 ## 3936: 31860 325 ## 3937: 31887 325 # estimate average health status of monthly cohorts cohortStats <- healthStats[cohort, on = c("id","month")] 
sumStats <- cohortStats[, .(avghealth = mean(health), n = .N), keyby = month]
head(sumStats)

## month avghealth n
## 1: 290 2.248175 137
## 2: 291 2.311765 170
## 3: 292 2.367347 147
## 4: 293 2.291925 161
## 5: 294 2.366906 139
## 6: 295 2.283871 155

Exploring bias

Finally, we are at the point where we can see what, if any, bias results in selecting our cohorts under the scenario I've outlined above. We start by generating multiple iterations of populations and cohorts and estimating average health status by month under the assumption that we will have a look-back period of 0. That is, we will accept an individual into the first possible cohort regardless of her previous emergency department visit history. The plot below shows the average across 1000 iterations. What we see is that the average health status of the cohorts in the first 20 months or so exceeds the long-run average. The incidence – prevalence bias is extremely strong if we ignore prior ED history!

Taking history into account

Once we start to incorporate ED history by using look-back periods greater than 0, we see that we can reduce bias considerably. The two plots below show the results of using look-back periods of 6 and 12 months. Both have reduced bias, but only at 12 months are we approaching something that actually looks desirable. In fact, under this scenario, we'd probably like to go back 24 months to eliminate the bias entirely. Of course, these particular results are dependent on the simulation assumptions, so determining an appropriate look-back period will certainly depend on the actual data. (When we do finally get the actual data, I will follow up to let you know what kind of adjustment we needed to make in the real, non-simulated world.)
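The washout logic in cAddPrior is easy to mirror in Python, which makes the look-back mechanics concrete (an illustrative re-implementation, not code from the post):

```python
def prior_event_counts(events, lookback):
    """For month i, count events in the previous `lookback` months.

    Mirrors cAddPrior: the first `lookback` months get None (NA in R)
    because their look-back window is incomplete.
    """
    counts = [None] * len(events)
    for i in range(lookback, len(events)):
        counts[i] = sum(events[i - lookback:i])
    return counts

# One individual's monthly qualifying-ED-visit indicator
ed_visits = [0, 0, 1, 0, 0, 1, 0]
print(prior_event_counts(ed_visits, lookback=3))
# [None, None, None, 1, 1, 1, 1]
```

An individual enters the cohort in the first month with a qualifying visit where this count is 0, exactly as in the `edAdj == 1 & pSum == 0` selection above.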
https://www.r-bloggers.com/should-we-be-concerned-about-incidence-prevalence-bias/
generate ssh config

Project Description

Installation

Requirements

Uses docopt:

    yum install python-docopt (or python3-docopt)

or:

    pip install docopt (or pip3 install docopt)

Also requires my scripts package:

    git clone
    cd scripts
    ./install

Introduction

The hosts that you would like to connect to are described in the hosts.py file. A very simple hosts.py:

gensshconfig

The above hosts.py file is converted into the following ssh config file:

# SSH Configuration for generic network
# Generated at 1:04 PM on 22 July 2014.
#
# HOSTS
#
host zeebra
    user herbie
    hostname zeebra.he.net
    forwardAgent no

The transformation between a host entry in the hosts.py file and the ssh config file could be affected by the network you are on and any command-line options that are specified to gensshconfig, but in this case it is not. Notice that the class name is converted to lower case when creating the hostname.

Configuration

The configuration of sshconfig involves two files, config.py and hosts.py. In config.py you describe networks, proxies, locations, and general defaults. In hosts.py, you describe the machines you would like to connect to on a regular basis.

Config

A typical config.py file would look like:
    #
    from sshconfig import NetworkEntry

    # Characteristics of the known networks
    class Home(NetworkEntry):
        routers = ['a8:93:14:8a:e4:31']  # Router MAC addresses
        location = 'home'

    class Work(NetworkEntry):
        routers = ['f0:90:76:9c:b1:37']  # Router MAC addresses
        location = 'home'

    class WorkWireless(NetworkEntry):
        routers = ['8b:38:10:3c:1e:fe']  # Router MAC addresses
        location = 'home'

    class Library(NetworkEntry):
        # Blocks port 22
        routers = [
            'e4:c7:22:f2:9a:46',  # Wireless
            '00:15:c7:01:a7:00',  # Wireless
            '00:13:c4:80:e2:89',  # Ethernet
            '00:15:c7:01:a7:00',  # Ethernet
        ]
        ports = [80, 443]
        location = 'home'
        init_script = 'activate_library_network'

    class DC_Peets(NetworkEntry):
        routers = ['e4:15:c4:01:1e:95']  # Wireless
        location = 'washington'
        init_script = 'unlock-peets'

    # Preferred networks, in order. If one of these networks are not available,
    # another will be chosen at random from the available networks.
    PREFERRED_NETWORKS = ['Work']

    # Location of output file (must be an absolute path)
    CONFIG_FILE = "~/.ssh/config"

    # Attribute overrides for all hosts
    OVERRIDES = """
        Ciphers aes256-ctr,aes128-ctr,arcfour256,arcfour,aes256-cbc,aes128-cbc
    """

    # Attribute defaults for all hosts
    DEFAULTS = """
        ForwardX11 no

        # This will keep a seemingly dead connection on life support for 10
        # minutes before giving up on it.
        TCPKeepAlive no
        ServerAliveInterval 60
        ServerAliveCountMax 10

        # Enable connection sharing
        ControlMaster auto
        ControlPath /tmp/ssh_mux_%h_%p_%r
    """

    # Known proxies
    PROXIES = {
        'work_proxy': 'socat - PROXY:webproxy.ext.workinghard.com:%h:%p,proxyport=80',
        'school_proxy': 'proxytunnel -q -p sproxy.fna.learning.edu:1080 -d %h:%p',
        'tunnelr_proxy': 'ssh tunnelr -W %h:%p',
            # it is not necessary to add tunnelr as a proxy, you can always
            # specify a host as a proxy, and if you do you will get this
            # proxyCommand by default. The only benefit adding this entry to
            # PROXIES provides is that tunnelr is listed in the available proxies
            # when using the --available command line option.
    }

    # My locations
    LOCATIONS = {
        'home': 'San Francisco California',
        'washington': 'Washington DC',
        'toulouse': 'Toulouse France',
    }

All of these entries are optional.

Subclassing NetworkEntry creates a network description that is described with the attributes. A subclass will inherit all the attributes of its parent. The following attributes are interpreted:

key:
    Name used when specifying the network. If not present, the class name in
    lower case is used.

description:
    A description of the network. If not given, the class name is used with
    the following modifications:
    - underscores are replaced by spaces
    - a space is added to separate a lower case to upper case transition
    - double underscores are replaced by ' - '

routers:
    A list of MAC addresses for the router that are used to identify the
    network. To find these, connect to the network and run the /sbin/arp
    command.

location:
    The default setting for the location (value should be chosen from
    LOCATIONS) when this network is active.

ports:
    The default list of ports that should be available when this network is
    active.

init_script:
    A script that should be run when on this network. May be a string or a
    list of strings. If it is a list of strings, they are joined together to
    form a command.

The unlock-peets script is included as an example of such a script. It is used to automate the process of accepting the terms & conditions on the click-through page. Unfortunately, while unlock-peets represents a reasonable example, each organization requires the basic script to be customized to fit its particular click-through pages.

To write such a script it is helpful to understand how the unlocking process works. The organizations that lock their wifi generally allow your computer to connect directly to their access point, but their firewall is configured to block any network traffic from unapproved devices. As you connect, they grab the MAC address of your computer's wifi.
They then watch for web requests emanating from your computer, which they discard while redirecting your browser to their router, which offers up a page that allows you to accept their terms and conditions. This page is customized particularly for you: it contains your MAC address. When you accept, your MAC address is returned to the router along with your acceptance, and the router then rewrites its firewall rules to allow your computer to access the internet. After some period of time (an hour? a day?) the rules are discarded and you lose your connection to the Internet.

All of this tremendously abuses Internet protocols, and it causes visitors headaches because the hack is not compatible with HTTPS or VPN traffic. So for it to work, you must request a plain HTTP site with any VPNs disabled, and plain HTTP sites are disappearing. The headaches this causes seem to provide very little value to anyone. They break the Internet so as to force you to accept their terms and conditions, which they presumably feel protects them from lawsuits, but it is hard to imagine anybody suing the owner of a public wifi for the actions of an anonymous user. But I digress.

Debugging init scripts can be difficult because once you successfully unlock the wifi, it generally remains unlocked for at least an hour, and maybe until the next day, which limits your ability to test your script. However, in Linux it is possible to change your MAC address. If you do so, the router no longer recognizes you and you have to go through the unlock process again, which allows you to thoroughly exercise and debug your script. To change your MAC address, right-click on the Network Manager applet and select 'Edit Connection ...', select the connection you are using and click 'Edit', then copy the 'Device MAC address' into 'Cloned MAC address' and change a few digits. The digits are hexadecimal, so choose values between 0-9 and A-F. Then click 'Save', 'Close', and restart your network connection.
proxy:
    The name of the proxy to use by default when this network is active.

PREFERRED_NETWORKS specifies a list of preferred networks. It is useful when your computer can access multiple networks simultaneously, such as when you are using a laptop connected to a wired network but have not turned off the wireless networking. SSH is configured for the first network on the PREFERRED_NETWORKS list that is available. If none of the preferred networks are available, then an available known network is chosen at random. If no known networks are available, SSH is configured for a generic network. In the example, the Work network is listed in the preferred networks because Work and WorkWireless would be expected to often be available simultaneously, and Work, the wired network, is considerably faster than WorkWireless.

CONFIG_FILE specifies the name of the ssh config file; the default is ~/.ssh/config. The path to the SSH config file should be an absolute path.

OVERRIDES contains ssh directives that are simply added to the top of the ssh config file. Such settings override any settings specified in the host entries. Do not place ForwardAgent in OVERRIDES. It will be added on the individual hosts and only set to yes if they are trusted.

DEFAULTS contains ssh directives that are added to the bottom of the ssh config file. Such settings act as defaults.

PROXIES allows you to give names to proxyCommand values. These names can then be specified on the command line so that all hosts use the proxy.

LOCATIONS is a dictionary of place names and descriptions of where you are likely to be located. It is needed only if you use the locations feature.

Hosts

A more typical hosts.py file would generally contain many host specifications. You subclass HostEntry to specify a host and then add attributes to configure its behavior. Information you specify is largely just placed in the ssh config file unmodified, except:

- The class name is converted to lower case to make it easier to type.
- 'forwardAgent' is added and set based on whether the host is trusted.
- Any attribute that starts with an underscore (_) is ignored and so can be used to hold intermediate values.

In most cases, whatever attributes you add to your class get converted into fields in the ssh host description. However, there are several attributes that are intercepted and used by SSH Config. They are:

description:
    A string that is added as a comment above the ssh host description.

aliases:
    A list of strings, each of which is added to the list of names that can
    be used to refer to this host.

trusted:
    Indicates that the base host should be trusted. Currently that means that
    agent forwarding will be configured for the non-tunneling version of the
    host.

tun_trusted:
    Indicates that the tunneling version of the host should be trusted.
    Currently that means that agent forwarding will be configured for the
    tunneling version of the host.

guests:
    A list of machines that are accessed using this host as a proxy.

Here is an example:

    class DigitalOcean(HostEntry):
        description = "Web server"
        aliases = ['do', 'web']
        user = 'herbie'
        hostname = '107.170.65.89'
        identityFile = 'digitalocean'

This results in the following entry in the ssh config file:

    # Web server
    host digitalocean do web
        user herbie
        hostname 107.170.65.89
        identityFile /home/herbie/.ssh/digitalocean
        identitiesOnly yes
        forwardAgent no

When specifying the identityFile, you can use either an absolute or a relative path. A relative path will be relative to the directory that will contain the ssh config file. Specifying identityFile results in identitiesOnly being added.

SSHconfig provides two utility functions that you can use in your hosts file to customize it based on either the hostname or the username in effect when gensshconfig is run. They are gethostname() and getusername(), and both can be imported from sshconfig. For example, I generally use a different identity (ssh key) from each machine I operate from.
To implement this, at the top of my hosts file I have:

    from sshconfig import gethostname

    class DigitalOcean(HostEntry):
        description = "Web server"
        aliases = ['do', 'web']
        user = 'herbie'
        hostname = '107.170.65.89'
        identityFile = gethostname()

Ports

If a host is capable of accepting connections on more than one port, you should use the choose() method of the ports object to select the appropriate port. For example:

    from sshconfig import HostEntry, ports

    class Tunnelr(HostEntry):
        description = "Proxy server"
        user = 'kundert'
        hostname = 'fremont.tunnelr.com'
        port = ports.choose([22, 80, 443])
        identityFile = 'tunnelr'

An entry such as this would be used when sshd on the host has been configured to accept ssh traffic on a number of ports, in this case ports 22, 80 and 443. The actual port used is generally the first port given in the list provided to choose(). However, this behavior can be overridden with the --ports (or -p) command line option. For example:

    gensshconfig --ports=80,443

or:

    gensshconfig -p80,443

This causes ports.choose() to return the first port given in the --ports specification if it is given anywhere in the list of available ports passed to choose(). If the first port does not work, it will try to return the next one given, and so on. So in this example, port 80 would be returned. If -p443,80 were specified, then port 443 would be used. You can specify as many ports as you like in a --ports specification; just separate them with a comma and do not add spaces.
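The selection rule described above can be sketched as a small pure-Python function. This is a hypothetical re-implementation for illustration only; the real ports.choose() lives inside sshconfig and also accounts for the active network's open ports:

```python
def choose_port(available, requested=None):
    """Pick a port following the rule described above.

    available: ports the host accepts; their order gives the default preference.
    requested: ports from a --ports option, in preference order, or None.

    The first requested port that the host actually supports wins;
    with no --ports option, the host's first listed port is used.
    """
    if requested:
        for port in requested:
            if port in available:
                return port
    # Fall back to the host's preferred (first listed) port.
    return available[0]

# With no --ports option, the first listed port is chosen.
print(choose_port([22, 80, 443]))             # → 22

# gensshconfig -p80,443: port 80 is supported, so it is used.
print(choose_port([22, 80, 443], [80, 443]))  # → 80

# gensshconfig -p443,80: port 443 wins because it is listed first.
print(choose_port([22, 80, 443], [443, 80]))  # → 443
```

The same ordering rule explains why -p443,80 and -p80,443 give different results even though they name the same ports.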
In this next example, we customize the proxy command based on the port chosen:

    class Home(HostEntry):
        description = "Home server"
        user = 'herbie'
        hostname = {
            'home': '192.168.1.32',
            'default': '231.91.164.05'
        }
        port = ports.choose([22, 80])
        if port in [80]:
            proxyCommand = 'socat - PROXY:%h:127.0.0.1:22,proxyport=%p'
        identityFile = 'my2014key'
        dynamicForward = 9999

An entry such as this would be used if sshd is configured to directly accept traffic on port 22, and Apache is configured to act as a proxy for ssh on ports 80 and 443 (see SSH via HTTP <>). If you prefer, you can use proxytunnel rather than socat in the proxy command:

    proxyCommand = 'proxytunnel -q -p %h:%p -d 127.0.0.1:22'

Attribute Descriptions

Most attributes can be given as a two element tuple. The first value in the pair is used as the value of the attribute, and the second should be a string that is added as a comment to describe the attribute. For example:

    hostname = '65.19.130.60', 'fremont.tunnelr.com'

is converted to:

    hostname 65.19.130.60    # fremont.tunnelr.com

Hostname

The hostname may be a simple string, or it may be a dictionary. If given as a dictionary, each entry will have a string key and a string value. The key is the name of the network (in lower case) and the value is the hostname or IP address to use when on that network. One of the keys may be 'default', which is used if the network does not match one of the given networks. For example:

    class Home(HostEntry):
        hostname = {
            'home': '192.168.0.1',
            'default': '74.125.232.64'
        }

When on the home network, this results in an ssh host description of:

    host home
        hostname 192.168.0.1
        forwardAgent no

When not on the home network, it results in an ssh host description of:

    host home
        hostname 74.125.232.64
        forwardAgent no

The ssh config file entry for this host will not be generated if you are not on one of the specified networks and default is not specified.
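The hostname-dictionary lookup just described amounts to a three-way rule, sketched here as a hypothetical helper (not sshconfig's actual code): use the entry for the current network, else the 'default' entry, else drop the host from the generated file.

```python
def resolve_hostname(hostname, network):
    """Resolve a host's hostname attribute for the current network.

    hostname: either a plain string or a {network: address} dict,
              optionally containing a 'default' key.
    network:  lower-case name of the active network.

    Returns the address to write into the ssh config file, or None,
    meaning the host entry should be omitted entirely.
    """
    if isinstance(hostname, str):
        return hostname
    if network in hostname:
        return hostname[network]
    return hostname.get('default')  # None means: omit the host entry

addresses = {'home': '192.168.0.1', 'default': '74.125.232.64'}
print(resolve_hostname(addresses, 'home'))   # → 192.168.0.1
print(resolve_hostname(addresses, 'work'))   # falls back to the default
print(resolve_hostname({'home': '192.168.0.1'}, 'work'))  # no default: omitted
```

The None return in the last call corresponds to the behavior noted above: with no matching network and no default, no entry is generated for the host.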
It is sometimes appropriate to set the hostname based on which host you are on rather than on which network. For example, if an sshconfig host configuration file is shared between multiple machines, then it is appropriate to give the following for a host that may become localhost:

    class Home(HostEntry):
        if gethostname() == 'home':
            hostname = '127.0.0.1'
        else:
            hostname = '192.168.1.4'

Location

It is also possible to choose the hostname based on location. The user specifies the location using:

    gensshconfig --location=washington

or:

    gensshconfig -lwashington

You can get a list of the known locations using:

    gensshconfig --available

To configure support for locations, you first specify your list of known locations in LOCATIONS:

    LOCATIONS = {
        'home': 'San Francisco California',
        'washington': 'Washington DC',
        'toulouse': 'Toulouse France',
    }

Then you must configure your hosts to use the location. To do so, you use the choose() method to set the location. The choose() method requires three things:

- A dictionary that gives hostnames or IP addresses, and perhaps a descriptive comment, as a function of the location. These locations are generally specific to the host.
- Another dictionary that maps the user's locations into the host's locations.
- A default location.
For example:

    from sshconfig import HostEntry, locations, ports

    class Tunnelr(HostEntry):
        description = "Commercial proxy server"
        user = 'kundert'
        hostname = locations.choose(
            locations = {
                'sf': ("65.19.130.60", "Fremont, CA, US (fremont.tunnelr.com)"),
                'la': ("173.234.163.226", "Los Angeles, CA, US (la.tunnelr.com)"),
                'wa': ("209.160.33.99", "Seattle, WA, US (seattle.tunnelr.com)"),
                'tx': ("64.120.56.66", "Dallas, TX, US (dallas.tunnelr.com)"),
                'va': ("209.160.73.168", "McLean, VA, US (mclean.tunnelr.com)"),
                'nj': ("66.228.47.107", "Newark, NJ, US (newark.tunnelr.com)"),
                'ny': ("174.34.169.98", "New York City, NY, US (nyc.tunnelr.com)"),
                'london': ("109.74.200.165", "London, UK (london.tunnelr.com)"),
                'uk': ("31.193.133.168", "Maidenhead, UK (maidenhead.tunnelr.com)"),
                'switzerland': ("178.209.52.219", "Zurich, Switzerland (zurich.tunnelr.com)"),
                'sweden': ("46.246.93.78", "Stockholm, Sweden (stockholm.tunnelr.com)"),
                'spain': ("37.235.53.245", "Madrid, Spain (madrid.tunnelr.com)"),
                'netherlands': ("89.188.9.54", "Groningen, Netherlands (groningen.tunnelr.com)"),
                'germany': ("176.9.242.124", "Falkenstein, Germany (falkenstein.tunnelr.com)"),
                'france': ("158.255.215.77", "Paris, France (paris.tunnelr.com)"),
            },
            maps = {
                'home': 'sf',
                'washington': 'va',
                'toulouse': 'france',
            },
            default = 'sf'
        )
        port = ports.choose([
            22, 21, 23, 25, 53, 80, 443, 524, 5555, 8888
        ])
        identityFile = 'tunnelr'

Now if the user specifies --location=washington on the command line, it is mapped to the host location va, which becomes mclean.tunnelr.com (209.160.73.168).

Normally, users are expected to choose a location from the list given in LOCATIONS. As such, every maps argument should support each of those locations. However, a user may give any location they wish. If the location given is not found in maps, it will be looked for in locations, and if it is not in locations, the default location is used.

Forwards

When forwards are specified, two ssh host entries are created.
The first does not include forwarding. The second has the same name with '-tun' appended and includes the forwarding. The reason this is done is that once one connection is set up with forwarding, a second connection that also attempts to perform forwarding will produce a series of error messages indicating that the ports are in use and so cannot be forwarded. Instead, you should use the tunneling version only when you want to set up the port forwards, and use the base entry at all other times.

Often forwarding connections are set up to run in the background as follows:

    ssh -f -N home-tun

If you have set up connection sharing using ControlMaster and then run:

    ssh home

SSH will automatically share the existing connection rather than starting a new one.

Both local and remote forwards should be specified as lists. The list items can either be simple strings, or tuple pairs if you would like to give a description for the forward. The string that describes the forward has the syntax 'lclHost:lclPort rmtHost:rmtPort', where lclHost and rmtHost can be either a host name or an IP address, and lclPort and rmtPort are port numbers. For example:

    '11025 localhost:25'

The local host is used to specify which machines can connect to the port locally. If the GatewayPorts setting is set to yes on the SSH server, then forwarded ports are accessible to any machine on the network. If the GatewayPorts setting is no, then the forwarded ports are only available from the local host. However, if GatewayPorts is set to clientspecified, then the accessibility of the forward address is set by the local host specified.

The VNC function is provided for converting VNC host and display number information into a setting suitable for a forward. You can give the local display number, the remote display number, the remote host name (from the perspective of the remote ssh server), and the local host name.
For example:

    VNC(lclDispNum=1, rmtHost='localhost', rmtDispNum=12)

This allows a local VNC client viewing display 1 to show the VNC server running on display 12 of the SSH server host. If you give a single number, it will be used for both display numbers. If you don't give a name, localhost is used as the remote host (in this case localhost represents the remote ssh server). So the above VNC entry in the local forwards could be shortened to:

    VNC(12)

if you configured the local VNC client to connect to display 12.

An example of many of these features:

    from sshconfig import HostEntry, ports, locations, VNC

    class Home(HostEntry):
        description = "Lucifer Home Server"
        aliases = ['lucifer']
        user = 'herbie'
        hostname = {
            'home': '192.168.0.1',
            'default': '74.125.232.64'
        }
        port = ports.choose([22, 80])
        if port in [80]:
            proxyCommand = 'socat - PROXY:%h:127.0.0.1:22,proxyport=%p'
        trusted = True
        identityFile = gethostname()
        localForward = [
            ('30025 localhost:25', "Mail - SMTP"),
            ('30143 localhost:143', "Mail - IMAP"),
            ('34190 localhost:4190', "Mail - Sieve"),
            ('39100 localhost:9100', "Printer"),
            (VNC(lclDispNum=1, rmtDispNum=12), "VNC"),
        ]
        dynamicForward = 9999

On a foreign network it produces:

    # Lucifer Home Server
    host home lucifer
        user herbie
        hostname 74.125.232.64
        port 22
        identityFile /home/herbie/.ssh/teneya
        identitiesOnly yes
        forwardAgent yes

    # Lucifer Home Server (with forwards)
    host home-tun lucifer-tun
        user herbie
        hostname 74.125.232.64
        port 22
        identityFile /home/herbie/.ssh/teneya
        identitiesOnly yes
        forwardAgent yes
        localForward 30025 localhost:25    # Mail - SMTP
        localForward 30143 localhost:143   # Mail - IMAP
        localForward 34190 localhost:4190  # Mail - Sieve
        localForward 39100 localhost:9100  # Printer
        localForward 5901 localhost:5912   # VNC
        dynamicForward 9999
        exitOnForwardFailure yes

Guests

The 'guests' attribute is a list of hostnames that would be accessed by using the host being described as a proxy.
The attributes specified are shared with its guests (other than hostname, port, and port forwards). The name used for the guest in the ssh config file is the host name combined with the guest name using a hyphen. For example:

    class Farm(HostEntry):
        description = "Entry Host to Machine farm"
        aliases = ['earth']
        user = 'herbie'
        hostname = {
            'work': '192.168.1.16',
            'default': '231.91.164.92'
        }
        trusted = True
        identityFile = 'my2014key'
        guests = [
            ('jupiter', "128GB Compute server"),
            ('saturn', "96GB Compute server"),
            ('neptune', "64GB Compute server"),
        ]
        localForward = [
            (VNC(dispNum=21, rmtHost='jupiter'), "VNC on Jupiter"),
            (VNC(dispNum=22, rmtHost='saturn'), "VNC on Saturn"),
            (VNC(dispNum=23, rmtHost='neptune'), "VNC on Neptune"),
        ]

On a foreign network this produces:

    # Entry Host to Machine Farm
    host farm earth
        user herbie
        hostname 231.91.164.92
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes

    # Entry Host to Machine Farm (with forwards)
    host farm-tun earth-tun
        user herbie
        hostname 231.91.164.92
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes
        localForward 5921 jupiter:5921  # VNC on Jupiter
        localForward 5922 saturn:5922   # VNC on Saturn
        localForward 5923 neptune:5923  # VNC on Neptune

    # 128GB Compute Server
    host farm-jupiter
        hostname jupiter
        proxyCommand ssh farm -W %h:%p
        user herbie
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes

    # 96GB Compute Server
    host farm-saturn
        hostname saturn
        proxyCommand ssh farm -W %h:%p
        user herbie
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes

    # 64GB Compute Server
    host farm-neptune
        hostname neptune
        proxyCommand ssh farm -W %h:%p
        user herbie
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes

Subclassing

Subclassing is an alternative to guests that gives more control over how the attributes are set.
When you create a host that is a subclass of another host (the parent), the parent is configured to be the proxy, and only the 'user' and 'identityFile' attributes are copied over from the parent, but these can be overridden locally. For example:

    class Jupiter(Farm):
        description = "128GB Compute Server"
        hostname = 'jupiter'
        tun_trusted = True
        remoteForward = [
            ('14443 localhost:22', "Reverse SSH tunnel used by sshfs"),
        ]

Notice that Jupiter subclasses Farm, which was described in an example above. This generates:

    # 128GB Compute Server
    host jupiter
        user herbie
        hostname jupiter
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent no
        proxyCommand ssh farm -W %h:%p

    # 128GB Compute Server (with forwards)
    host jupiter-tun
        user herbie
        hostname jupiter
        identityFile /home/herbie/.ssh/my2014key
        identitiesOnly yes
        forwardAgent yes
        proxyCommand ssh farm -W %h:%p
        remoteForward 14443 localhost:22

If you contrast this with farm-jupiter above, you will see that the name is different, as is the trusted status (farm-jupiter inherits 'trusted' from Farm, whereas jupiter does not). Also, there are two versions, one with port forwarding and one without.

Proxies

Some networks block connections to port 22. If your desired host accepts connections on other ports, you can use the --ports feature described above to work around these blocks. However, some networks block all ports and force you to use a proxy. Or, if you do have open ports but your host does not accept ssh traffic on those ports, you can sometimes use a proxy to access your host. Available proxies are specified by adding PROXIES to the hosts.py file. Then, if you would like to use a proxy, you use the --proxy (or -P) command line argument to specify the proxy by name.
For example:

    PROXIES = {
        'work_proxy': 'corkscrew webproxy.ext.workinghard.com 80 %h %p',
        'school_proxy': 'corkscrew sproxy.fna.learning.edu 1080 %h %p',
    }

Two HTTP proxies are described: the first is capable of bypassing the corporate firewall, and the second does the same for the school's firewall. Each is a command that takes its input from stdin and produces its output on stdout. The program 'corkscrew' is designed to proxy a TCP connection through an HTTP proxy. The first two arguments are the host name and port number of the proxy. corkscrew connects to the proxy and passes the third and fourth arguments, the host name and port number of the desired destination.

There are many alternatives to corkscrew. One is socat:

    PROXIES = {
        'work_proxy': 'socat - PROXY:webproxy.ext.workinghard.com:%h:%p,proxyport=80',
        'school_proxy': 'socat - PROXY:sproxy.fna.learning.edu:%h:%p,proxyport=1080',
    }

Another alternative is proxytunnel:

    PROXIES = {
        'work_proxy': 'proxytunnel -q -p webproxy.ext.workinghard.com:80 -d %h:%p',
        'school_proxy': 'proxytunnel -q -p sproxy.fna.learning.edu:1080 -d %h:%p',
    }

When at work, you should generate your ssh config file using:

    gensshconfig --proxy=work_proxy

or:

    gensshconfig -Pwork_proxy

You can get a list of the pre-configured proxies using:

    gensshconfig --available

It is also possible to use ssh hosts as proxies. For example, when at an internet cafe that blocks port 22, you can work around the blockage, even if your host only supports port 22, by using:

    gensshconfig --ports=80 --proxy=tunnelr

or:

    gensshconfig -p80 -Ptunnelr

Using the --proxy command line argument adds a proxyCommand entry to every host that does not already have one (except the host being used as the proxy). In that way, proxies are automatically chained. For example, in the example given above, Jupiter subclasses Farm, and so it naturally gets a proxyCommand that causes it to be proxied through Farm, but Farm does not have a proxyCommand.
By running gensshconfig with --proxy=tunnelr, Farm will get a proxyCommand indicating it should proxy through tunnelr, but Jupiter retains its original proxyCommand. So when connecting to jupiter, a two-link proxy chain is used: packets are first sent to tunnelr, which forwards them to farm, which forwards them to jupiter.

You can specify a proxy on the NetworkEntry for your network. If you do, that proxy will be used by default, when on that network, for all hosts that are not on that network. A host is said to be on the network if a hostname is specifically given for that network. For example, assume you have a network configured for work:

    class Work(NetworkEntry):
        # Work network
        routers = ['78:92:4d:2b:30:c6']
        proxy = 'work_proxy'

Then assume you have a host that is not configured for that network (Home) and one that is (Farm):

    class Home(HostEntry):
        description = "Home Server"
        aliases = ['lucifer']
        user = 'herbie'
        hostname = {
            'home': '192.168.0.1',
            'default': '74.125.232.64'
        }
        proxyCommand = 'socat - PROXY:webproxy.ext.workinghard.com:%h:%p,proxyport=80'

    class Farm(HostEntry):
        description = "Entry Host to Machine farm"
        aliases = ['mercury']
        user = 'herbie'
        hostname = {
            'work': '192.168.1.16',
            'default': '231.91.164.92'
        }

When on the work network, you will use the proxy when you connect to Home, and you will not when you connect to Farm.
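The rule just described, in which a network's default proxy is attached only to hosts that do not already define a proxyCommand and that are not themselves on the current network, can be sketched with a hypothetical helper. This is illustrative only, not sshconfig's internals, and the host data below is made up:

```python
def apply_default_proxy(hosts, network, proxy_command):
    """Attach a network's default proxyCommand to the hosts that need it.

    hosts: mapping of host name -> attribute dict; the 'hostname' value may
           be a plain string or a {network: address} dict.
    network: name of the active network.
    proxy_command: the network's default proxy command.

    A host keeps its own proxyCommand if it has one, and hosts that are
    'on the network' (their hostname dict names this network) are skipped.
    """
    for attrs in hosts.values():
        hostname = attrs.get('hostname')
        on_network = isinstance(hostname, dict) and network in hostname
        if 'proxyCommand' not in attrs and not on_network:
            attrs['proxyCommand'] = proxy_command
    return hosts

hosts = {
    'home': {'hostname': {'home': '192.168.0.1', 'default': '74.125.232.64'},
             'proxyCommand': 'socat - PROXY:webproxy:%h:%p,proxyport=80'},
    'farm': {'hostname': {'work': '192.168.1.16', 'default': '231.91.164.92'}},
    'backup': {'hostname': 'backup.example.com'},
}
apply_default_proxy(hosts, 'work', 'corkscrew webproxy 80 %h %p')
# home keeps its own proxyCommand, farm is on the work network so it is
# skipped, and backup picks up the network's default proxy.
```

This mirrors the Home/Farm example above: Home is proxied by its own command, while Farm, being on the work network, is reached directly.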
https://pypi.org/project/sshconfig/
You use ASP.NET unit tests to test methods that are part of ASP.NET projects. You can create an ASP.NET unit test in either of two ways:

- By generating the ASP.NET unit test from an ASP.NET project. This is the most common scenario.
- By configuring an existing unit test as an ASP.NET unit test.

You can also specify settings in a run configuration that correspond to the attributes used by ASP.NET unit tests. These procedures are described in the following sections.

When you run ASP.NET unit tests, do not use the ClassCleanupAttribute or ClassInitializeAttribute attributes on any method in a class that contains an ASP.NET unit test. Similarly, do not use the AssemblyCleanupAttribute or AssemblyInitializeAttribute attributes in the same assembly as an ASP.NET unit test. The result of using these attributes in these situations is undefined. However, you can use the TestInitializeAttribute and TestCleanupAttribute attributes for any unit tests.

Setup scripts and cleanup scripts run before and after test runs, regardless of the types of tests that are contained in those test runs. For more information about scripts that are run in conjunction with test runs, see Test Deployment Overview and How to: Specify a Test Run Configuration.

To generate an ASP.NET unit test, you first create an ASP.NET Web site within your Visual Studio solution. You then add a class to the Web site project and, finally, generate a unit test from that class.

To generate an ASP.NET unit test, start by creating an ASP.NET Web site. To do this, right-click your solution, point to Add, and then click New Web Site. In the Add New Web Site dialog box, click ASP.NET Web Site. Under Location, click File System to indicate the ASP.NET Development Server. Click OK. You now have a new Web site.

Add a class to this project. To do this, in Solution Explorer, right-click the Web site and then click Add New Item. In the Add New Item dialog box, click Class, and then click Add.
A Microsoft Visual Studio dialog box is displayed to ask about placing the new class in the App_Code folder. Click Yes. You cannot generate tests from code in an .aspx file or in a folder other than the App_Code folder.

Generate an ASP.NET unit test. If the new class file is not already open, double-click it in Solution Explorer to open it. Right-click the class in the class file, and then click Create Unit Tests. The Create Unit Tests dialog box appears. For information about how to use this dialog box to generate unit tests, see How to: Generate a Unit Test. Verify that the classes, methods, or namespaces for which you want to generate tests are selected. (Optional) Accept the default Output project or select a new project. When you are finished, click OK.

The new ASP.NET unit test is added to a file in your test project. To see the unit test, open the test file and scroll to the end. The attributes that are necessary to run a unit test as an ASP.NET unit test have been automatically specified. For more information about these attributes, see the following procedure, Configuring an ASP.NET Unit Test.

Configuring an ASP.NET Unit Test

You can turn an existing unit test into an ASP.NET unit test by configuring it, that is, by assigning values to certain of the test's custom attributes. You set these values in the code file that contains the unit test. Before setting the custom attributes, you should first add a reference to the namespace that supports them; this is the Microsoft.VisualStudio.TestTools.UnitTesting.Web namespace. With this reference in place, IntelliSense can help you set the attribute values.

Note: When you generate an ASP.NET unit test, these attributes are set automatically.

Open the code file that contains the unit test. Set the following attributes for the unit test:

[TestMethod] Because all unit tests require the [TestMethod] attribute, this attribute will already be in place.
[UrlToTest()] This is the URL that is tested when this unit test is run; for example, [UrlToTest("")].

[HostType()] Use [HostType("ASP.NET")]. Tests are typically run under the VSTest host process, but ASP.NET unit tests must run under the ASP.NET host process.

Examples

Example 1. If you are running your Web site with the ASP.NET Development Server, the attributes and values you set for an ASP.NET unit test might resemble the following:

    [TestMethod()]
    [HostType("ASP.NET")]
    [UrlToTest("")]
    [AspNetDevelopmentServerHost("D:\\Documents and Settings\\user name\\My Documents\\Visual Studio 2005\\WebSites\\WebSite1", "/WebSite1")]

Example 2. To test a Web site running under IIS, use only the attributes TestMethod, HostType, and UrlToTest.

You can specify settings in a run configuration that correspond to the attributes used by ASP.NET unit tests. After you have specified these attributes in a run configuration, the settings will apply when you run any ASP.NET unit test whenever that run configuration is active. Only one set of settings for ASP.NET unit tests can be in effect: run configuration settings or attribute settings, but never a mixture of the two. Run configuration settings take precedence over attributes, if present. This means that even if you specify only one ASP.NET setting in the run configuration, any ASP.NET settings specified as attributes are ignored.

Open a run configuration file. For more information, see How to: Specify a Test Run Configuration. On the Hosts page, set the Host Type to ASP.NET. This displays additional choices, some of which correspond to attributes you can specify in code, such as Url to Test. These attributes are described in the previous procedure, "Configuring an ASP.NET Unit Test." When you have finished setting the values on the Hosts page, click Save and then click OK.
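For Example 2, the attribute set might look like the following sketch. The URL here is a placeholder of my own (the original document's URLs did not survive extraction), so substitute the address of your IIS-hosted site:

    [TestMethod()]
    [HostType("ASP.NET")]
    [UrlToTest("http://localhost/WebSite1")]

Note that the AspNetDevelopmentServerHost attribute from Example 1 is omitted, since it applies only when the site runs under the ASP.NET Development Server rather than IIS.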
On Wed, Mar 16, 2011 at 10:07:48PM +0100, Richard Weinberger wrote:
> On Wednesday, 16 March 2011, 22:04:52, Alexey Dobriyan wrote:
> > On Wed, Mar 16, 2011 at 09:52:49PM +0100, Richard Weinberger wrote:
> > > On Wednesday, 16 March 2011, 21:45:45, Arnd Bergmann wrote:
> > > > On Wednesday 16 March 2011 21:08:16 Richard Weinberger wrote:
> > > > > On Wednesday, 16 March 2011, 20:55:49, Kees Cook wrote:
> > > > > > On Wed, Mar 16, 2011 at 08:31:47PM +0100, Richard Weinberger wrote:
> > > > > > > When containers like LXC are used an unprivileged and jailed
> > > > > > > root user can still write to critical files in /proc/.
> > > > > > > E.g: /proc/sys/kernel/{sysrq, panic, panic_on_oops, ...}
> > > > > > >
> > > > > > > This new restricted attribute makes it possible to protect such
> > > > > > > files. When restricted is set to true, root needs CAP_SYS_ADMIN
> > > > > > > to write to the file.
> > > > > >
> > > > > > I was thinking about this too. I'd prefer more fine-grained control
> > > > > > in this area, since some sysctl entries aren't strictly controlled
> > > > > > by CAP_SYS_ADMIN (e.g. mmap_min_addr is already checking
> > > > > > CAP_SYS_RAWIO).
> > > > > >
> > > > > > How about this instead?
> > > > >
> > > > > Good idea.
> > > > > Maybe we should also consider a per-directory restriction.
> > > > > Every file in /proc/sys/{kernel/, vm/, fs/, dev/} needs protection.
> > > > > It would be much easier to set the protection on the parent directory
> > > > > instead of protecting file by file...
> > > >
> > > > How does this interact with the per-namespace sysctls that Eric
> > > > Biederman added a few years ago?
> > >
> > > Do you mean CONFIG_{UTS, IPC, USER, NET}_NS?
> >
> > It only covers /proc/sys/net/
>
> Exactly.
>
> > > > I had expected that any dangerous sysctl would not be visible in
> > > > an unprivileged container anyway.
> > >
> > > No way.
> >
> > No way what exactly?
>
> Dangerous sysctls are not protected at all.
> E.g. a jailed root can use /proc/sysrq-trigger.

Yes, and it's suggested that you do not show it at all,
instead of bloating ctl_table.

But this requires knowledge which /proc is root and which one is "root". :-(

With current splitup into FOO_NS...
#include "suricata-common.h"
#include "stream-tcp-private.h"
#include "stream-tcp.h"
#include "stream-tcp-reassemble.h"
#include "stream-tcp-inline.h"
#include "stream-tcp-list.h"
#include "util-streaming-buffer.h"
#include "util-print.h"
#include "util-validate.h"
#include "tests/stream-tcp-list.c"

Segment list functions for insertions, overlap handling, removal and more. Definition in file stream-tcp-list.c.

Remove idle TcpSegments from TcpSession. Checks app progress and raw progress and progresses them if needed, slides the streaming buffer, then gets rid of excess segments. Definition at line 801 of file stream-tcp-list.c. References TcpSession_::client, flags, TcpStream_::flags, TcpSession_::flags, Flow_::protoctx, TcpStream_::sb, SCEnter, SCLogDebug, SCReturn, TcpSession_::server, STREAM_TOCLIENT, STREAM_TOSERVER, StreamingBufferClear(), STREAMTCP_FLAG_APP_LAYER_DISABLED, STREAMTCP_STREAM_FLAG_DEPTH_REACHED, STREAMTCP_STREAM_FLAG_DISABLE_RAW, STREAMTCP_STREAM_FLAG_GAP, STREAMTCP_STREAM_FLAG_NOREASSEMBLY, and StreamTcpReturnStreamSegments().

Definition at line 38 of file stream-tcp-list.c.

In case of error, this function returns the segment to the pool. Definition at line 557 of file stream-tcp-list.c. Referenced by StreamTcpUTAddSegmentWithByte() and StreamTcpUTAddSegmentWithPayload().

Compare function for the Segment tree. The main sort key is the sequence number. When sequence numbers are equal, payload_len is compared as well. This way the tree is sorted by seq, and in case of duplicate seqs the entries are sorted small to large. Definition at line 49 of file stream-tcp-list.c. References TcpSegment::payload_len, TcpSegment::seq, SEQ_GT, and SEQ_LT.
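The comparator described above (seq first, then payload_len) is easy to get wrong around 32-bit sequence-number wrap-around. Here is a minimal standalone sketch of such a comparator; the struct and macros are simplified stand-ins, not Suricata's actual TcpSegment or SEQ_* definitions:

```c
#include <stdint.h>

/* Simplified stand-in for TcpSegment: only the two sort keys. */
typedef struct Segment {
    uint32_t seq;
    uint16_t payload_len;
} Segment;

/* Wrap-around-safe sequence comparisons in the style of SEQ_LT/SEQ_GT:
 * the signed 32-bit difference handles wrap correctly. */
#define SEQ_LT(a, b) ((int32_t)((a) - (b)) < 0)
#define SEQ_GT(a, b) ((int32_t)((a) - (b)) > 0)

/* Tree compare: primary key is seq; for duplicate seqs, shorter
 * payloads sort before longer ones (small to large). */
static int segment_compare(const Segment *a, const Segment *b)
{
    if (SEQ_LT(a->seq, b->seq))
        return -1;
    if (SEQ_GT(a->seq, b->seq))
        return 1;
    if (a->payload_len < b->payload_len)
        return -1;
    if (a->payload_len > b->payload_len)
        return 1;
    return 0;
}
```

The wrap-around handling is the part worth copying: a plain `a->seq < b->seq` would sort a segment just past the 2^32 boundary before one just below it.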
Lazy coder's mapping of DNS prefix to a sub-directory in a web application.

The Servlet specification provides a really elegant mechanism for packaging up a whole website into a single WAR file and deploying that file as a website. Multiple websites can be mapped to different domain name prefixes, such as '' versus 'tranche.proteomecommons.org'. This blog explains a hack to map the domain prefix to a sub-directory of the same web application.

Why use this hack? Well, I had two good reasons. First is that when I first helped make the website we didn't put it in a proper build system. Thus it grew into a hodgepodge of JSP, HTML, and Java code. Second, the domain prefix that we were using, 'tranche.proteomecommons.org', wouldn't easily fit into its own web application because it relies on a shared in-memory database. Sure, I could invest a significant chunk of time refactoring the code and binding the database to a local, protected socket, but that'd take a lot more time than the ten minutes that it would take to hack out a URL mapping filter.

Here is the filter's source code in full. I'll discuss how it works after.
package org.proteomecommons;

import java.io.File;
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Jayson Falkner - jfalkner@umich.edu
 */
public class TrancheRedirectFilter implements Filter {
    // save the filter config -- needed later
    FilterConfig config = null;
    // the domain to match
    String domain = "tranche.proteomecommons.org";
    // uri to pre-append
    String preAppend = "/dev/dfs";

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;
        // check the URL for the domain name
        String url = req.getRequestURL().toString();
        if (url.contains(domain)) {
            // get the resource
            String uri = tweak(url);
            // forward to the resource
            req.getRequestDispatcher(uri).forward(req, res);
        } else {
            chain.doFilter(req, res);
        }
    }

    // dynamically change the URL. (The body of this method was lost from the
    // original post; this reconstruction follows the description given below.)
    private String tweak(String url) {
        // keep only the part of the URL after the matched domain
        String resource = url.substring(url.indexOf(domain) + domain.length());
        // pre-append the sub-directory
        String uri = preAppend + resource;
        // handle the fringe case where the URL points at a directory
        File file = new File(config.getServletContext().getRealPath(uri));
        if (file.isDirectory() && !uri.endsWith("/")) {
            uri += "/";
        }
        return uri;
    }

    public void init(FilterConfig filterConfig) throws ServletException {
        this.config = filterConfig;
    }

    public void destroy() {
    }
}

The code assumes that you want to map all URLs that start with a certain prefix to a sub-directory of your website. For example, '' goes to the normal website. The Tranche Project website was developed in the '/dev/dfs' folder of the website, thus by default making its URL. However, Tranche grew up quickly and we wanted to give it a proper, top-level domain 'tranche.proteomecommons.org' while not breaking any of the old '/dev/dfs' links. In short, we wanted to make all URLs starting with 'tranche.proteomecommons.org' automatically go to the '/dev/dfs' folder without redirecting to an ugly URL starting with www and /dev/dfs at the end.

The Filter is generic. The first two variables will swap any domain name prefix with a folder location. In this case, 'tranche.proteomecommons.org' is swapped with '/dev/dfs'.
// the domain to match
String domain = "tranche.proteomecommons.org";
// uri to pre-append
String preAppend = "/dev/dfs";

If you want to copy/paste the above code, simply swap these variables to be the domain name and local directory of files to map to. If you are up for it, you might abstract them to variables in the Filter's web.xml declaration.

Only the tweak() method of the filter merits more discussion. The other code causes the Filter to invoke or skip tweak() based on whether the URL has the domain name specified. In the tweak() method, the code does the swapping.

First, the URL is split to remove the domain name with prefix, 'tranche.proteomecommons.org', and replace it with the default domain name, ''. Next, the URL is padded to include the proper sub-directory, "/dev/dfs". Finally, the Filter forwards the request and response to the appropriate page. A little File check is made to handle a fringe case where a URL goes to a directory.

That is it! If it still isn't clear why the above is handy, note that the two URLs now work exactly the same. Also, the two links go to the same exact file on the server. All of our old links work fine. All of the new links starting with 'tranche.proteomecommons.org' work. Best of all the whole hack took about 10 minutes...well, a half hour if you count writing this blog.

Comments

by jfalkner - 2008-02-23 12:40
That is very true. If you are using a Filter for security, you can still have it apply; however, I'm not sure if the default web.xml security restrictions will apply. I'd guess not, but it would be worth checking. For the particular use case above, the tranche.proteomecommons.org portion of the site does not have any sort of security restrictions.

by whartung - 2008-02-19 15:35
You just need to be careful here regarding how security is set up.
If you have security on /dev/dfs, the original url may well kick in the login page, whereas the new url may not. The forward may not honor the security obligation.
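As the post suggests, the domain and preAppend values could be abstracted into the Filter's web.xml declaration instead of being hard-coded. A sketch of what that might look like; the init-param names are hypothetical, and init() would need to read them via config.getInitParameter():

```xml
<!-- Hypothetical declaration: parameter names are illustrative,
     not from the original post. -->
<filter>
    <filter-name>TrancheRedirectFilter</filter-name>
    <filter-class>org.proteomecommons.TrancheRedirectFilter</filter-class>
    <init-param>
        <param-name>domain</param-name>
        <param-value>tranche.proteomecommons.org</param-value>
    </init-param>
    <init-param>
        <param-name>preAppend</param-name>
        <param-value>/dev/dfs</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>TrancheRedirectFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```

This would let you remap another prefix/folder pair by editing the deployment descriptor instead of recompiling the filter.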
AWS News Blog

Continuous Backup

To enable this feature in the console we navigate to our table and select the Backups tab. From there simply click Enable to turn on the feature. I could also turn on continuous backups via the UpdateContinuousBackups API call. After continuous backup is enabled we should be able to see an Earliest restore date and a Latest restore date.

Let's imagine a scenario where I have a lot of old user profiles that I want to delete. I really only want to send service updates to our active users based on their last_update date. I decided to write a quick Python script to delete all the users that haven't used my service in a while.

import boto3

table = boto3.resource("dynamodb").Table("VerySuperImportantTable")
items = table.scan(
    FilterExpression="last_update >= :date",
    ExpressionAttributeValues={":date": "2014-01-01T00:00:00"},
    ProjectionExpression="ImportantId"
)['Items']

print("Deleting {} Items! Dangerous.".format(len(items)))

with table.batch_writer() as batch:
    for item in items:
        batch.delete_item(Key=item)

Great! This should delete all those pesky non-users of my service that haven't logged in since 2013. So, I run it— CTRL+C CTRL+C CTRL+C CTRL+C (interrupt the currently executing command).

Yikes! Do you see where I went wrong? I've just deleted my most important users! Oh, no! Where I had a greater-than sign, I meant to put a less-than! Quick, before Jeff Barr can see, I'm going to restore the table. (I probably could have prevented that typo with Boto 3's handy DynamoDB conditions: Attr("last_update").lt("2014-01-01T00:00:00"))

Restoring

Luckily for me, restoring a table is easy. In the console I'll navigate to the Backups tab for my table and click Restore to point-in-time. I'll specify the time (a few seconds before I started my deleting spree) and a name for the table I'm restoring to. For a relatively small and evenly distributed table like mine, the restore is quite fast.
The time it takes to restore a table varies based on multiple factors, and restore times are not necessarily correlated with the size of the table. If your dataset is evenly distributed across your primary keys you'll be able to take advantage of parallelization, which will speed up your restores.

Learn More & Try It Yourself

There's plenty more to learn about this new feature in the documentation here. Pricing for continuous backups is detailed on the DynamoDB Pricing Pages. Pricing varies by region and is based on the current size of the table and indexes. For example, in US East (N. Virginia) you pay $0.20 per GB based on the size of the data and all local secondary indexes.

A few things to note:

- PITR works with encrypted tables.
- If you disable PITR and later re-enable it, you reset the start time from which you can recover.
- Just like on-demand backups, there are no performance or availability impacts to enabling this feature.
- Stream settings, Time To Live settings, PITR settings, tags, Amazon CloudWatch alarms, and auto scaling policies are not copied to the restored table.
- Jeff, it turns out, knew I restored the table all along because every PITR API call is recorded in AWS CloudTrail.

PITR is available in the US East (N. Virginia), US East (Ohio), starting today. Let us know how you're going to use continuous backups and PITR on Twitter and in the comments.

– Randall
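Coming back to the deletion bug earlier in the post: the difference between the broken scan and the intended one is just the comparison direction. A plain-Python illustration of the intended filter, using made-up sample records and no AWS calls:

```python
# Illustration only: the corrected filter logic from the story above, run on
# plain dictionaries instead of a live DynamoDB table. Field names match the
# script in the post; the sample data is invented.
CUTOFF = "2014-01-01T00:00:00"  # ISO-8601 strings compare lexicographically

def stale_profiles(items, cutoff=CUTOFF):
    """Profiles whose last_update is strictly BEFORE the cutoff --
    the '<' comparison the deletion script should have used, not '>='."""
    return [item for item in items if item["last_update"] < cutoff]

users = [
    {"ImportantId": "a", "last_update": "2013-06-01T12:00:00"},  # inactive
    {"ImportantId": "b", "last_update": "2017-03-26T09:30:00"},  # active
]
```

Running `stale_profiles(users)` keeps only the 2013 record, which is exactly the set the script was supposed to delete.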
With this tip, you can add the use of static inner classes to your bag of Java tricks. A static inner class is a class that is defined inside another class's definition and marked as static. I'll show you an example of using static inner classes to add testing code to a class.

Static inner classes are pretty simple in concept and implementation. Basically, you define a static class inside your primary class:

public class Foo {
    // ....
    public static class Test {
        public static void main (String[] args) {
            // ....
        }
    }
}

In terms of adding supplementary code to your primary class, the key point is that a static inner class is compiled into a completely separate .class file from the outer class. For example, if the outer class is named Foo, then an inner class of Foo, which is named Test, would be compiled into Foo$Test.class.

The separation of .class files means that you can keep the supplemental, nested code tightly coupled to the primary, outer class. They are in the same source file, and the inner class is actually inside the outer class. All that and you don't have to pay any sort of deployment or runtime cost. Score! For example, if the supplemental code is only used for, say, debugging, then you only have to ship the Foo.class file and leave the Foo$Test.class file at home.

I primarily use that trick for writing example code to show how to use the primary outer class, for writing down and dirty debugging code, and for writing unit tests to automate the validation of the class's behavior. (Of course, being a diligent developer, I tend to transform the debugging code into unit tests.)

Note that to execute the main() method of that Foo.Test class, you use:

% java Foo$Test

If you're using a command shell that uses "$" as a metacharacter, you would instead use:

% java Foo\$Test

Another interesting point to note is that static inner classes have, by definition, access to the outer class's protected and private fields.
That is both a blessing and a curse, since you can, in essence, violate the encapsulation of the outer class by mucking up the outer class's protected and private fields. Tread with care! The only proper use of that capability is to write white-box tests of the class -- since I can induce cases that might be very hard to induce via normal black-box tests (which don't have access to the internal state of the object).

The XYPair class is quite simple. It just provides for an immutable pair of integers, (x, y). The XYPair.Test class has a main() method that drives a simple test of the XYPair and prints the results. Play with both the testing and core code to experiment with various problems. If you're more bold, you might want to check out the JUnit Java unit testing framework. You can uncomment the various spots indicated in the source code and then run the tests via JUnit's test engine.

Conclusion

By using static inner classes, you can add additional support functionality to your systems for capabilities such as testing, while incurring no penalties in normal, production deployment.

Learn more about this topic

- Download the XYPair source code to this Java Tip
- Read the JUnit homepage for more on JUnit
- Read more about JUnit in "JUnit Best Practices," Andy Schneider (JavaWorld, Dec. 2000)
- For more Java tricks, subscribe to ITworld.com's free Java Tutor newsletter
- Speak out in the Java Beginner discussion, moderated by JavaWorld author Geoff Friesen
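The XYPair source itself shipped as a separate download and isn't reproduced in the article, so here is an illustrative reconstruction of the pattern (not the original code): an immutable pair with a nested Test class that exercises it, including a white-box peek at the private fields.

```java
// Illustrative reconstruction -- the original XYPair download is not
// available, so everything beyond the XYPair/Test names is a guess.
public class XYPair {
    private final int x;
    private final int y;

    public XYPair(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // Compiles to XYPair$Test.class -- a separate file you can
    // simply leave out of a production deployment.
    public static class Test {
        public static void main(String[] args) {
            XYPair pair = new XYPair(1, 2);
            // Black-box check through the public API...
            if (pair.getX() != 1 || pair.getY() != 2) {
                throw new AssertionError("getter check failed");
            }
            // ...and a white-box check: the nested class may read
            // the outer class's private fields directly.
            if (pair.x != 1 || pair.y != 2) {
                throw new AssertionError("private field check failed");
            }
            System.out.println("XYPair tests passed");
        }
    }
}
```

Run it with `java XYPair$Test` (or `java XYPair\$Test` in shells that treat `$` as a metacharacter).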
Asked by: LinkTo predicates

I was doodling some dataflow networks and ran into an issue with LinkTo, predicates and the discardsMessages boolean. I had some links with the last one's discardsMessages set to true, but then I added another link to another block; that block would never get any messages since the previous link was discarding them. As far as I understand there is no way to change the discardsMessages value.

I'm not sure how I feel about having that option in the first place... I can see the need for it, but I can also feel like it doesn't belong on the link, but rather on the source block. I suppose the question is what it is you want to have happen if nobody wants a particular message. BufferBlock waits forever until someone who wants the message comes along; BroadcastBlock waits until another message comes in. In both cases though, it's the source that decides.

I kind of wish there was a SwitchBlock to which you could add predicates and associated target blocks, but that also had a 'Default' source that it could post messages to that didn't match any predicate. Another, perhaps cleaner solution would be If blocks that take a T and then expose two sources also of T, one if the supplied predicate is true and another if it's false. That block could have an option to decline messages that don't match the predicate, or post them to its own False source [that could be linked wherever].

If I'm understanding correctly, LinkTo already uses an If block of sorts, it's just not public.
General discussion

Perhaps something like this:

public class IfBlock<T> : ITargetBlock<T>
{
    private Func<T, bool> _predicate = null;

    public ITargetBlock<T> True { get; set; }
    public ITargetBlock<T> False { get; set; }
    public bool DeclineOnFalse { get; set; }

    public void DeclinePermanently()
    {
        throw new NotImplementedException();
    }

    public Task CompletionTask
    {
        get { throw new NotImplementedException(); }
    }

    public IfBlock(Func<T, bool> predicate)
    {
        _predicate = predicate;
        True = new BufferBlock<T>();
        False = new BufferBlock<T>();
    }

    public DataflowMessageStatus OfferMessage(DataflowMessage<T> message, ISourceBlock<T> source, bool consumeToAccept)
    {
        var match = _predicate(message.Value);
        if (match)
        {
            return True.OfferMessage(message, source, consumeToAccept);
        }
        else
        {
            if (DeclineOnFalse)
                return DataflowMessageStatus.Declined;
            else
                return False.OfferMessage(message, source, consumeToAccept);
        }
    }

    public bool Post(T item)
    {
        if (_predicate(item))
            return True.Post(item);
        else if (DeclineOnFalse)
            return false;
        else
            return False.Post(item);
    }
}

Hi Allan,

discardMessages on LinkTo is no worse than linking a block that consumes all offered messages. Take this example for instance:

var source = new BufferBlock<int>();
var sweeper = new BufferBlock<int>();
// this block accepts all offered messages
var doomed = new ActionBlock<int>(x => Console.WriteLine("A miracle has happened!"));

// This block will never get a message when linked after sweeper
var sweeperLink = source.LinkTo(sweeper);
source.LinkTo(doomed);

Console.WriteLine("Do you believe in miracles?");
for (int i = 0; i < 1000000000; i++)
    source.Post(i);
Console.WriteLine("I guess not.");

Console.WriteLine("Alright. Enjoy!");
sweeperLink.Dispose();
source.Post(1);

The discardMessages option is not used, but the doomed block still doesn't get any message as long as the sweeper remains linked, because the sweeper accepts everything.
So don't take it too hard on discardMessages – this is how the network was constructed. The moment the sweeping link is disposed of, a miracle will happen.

With your "If" block, you are getting to the rationale behind LinkTo. LinkTo uses an internal "conditional synchronous propagator" block. That allows you to easily model what you want:

a) forward all verified messages to verifiedTarget, and decline all unverified messages:

source.LinkTo(verifiedTarget, verification);

b) forward all verified messages to verifiedTarget, and forward all unverified messages to unverifiedTarget:

source.LinkTo(verifiedTarget, verification);
source.LinkTo(unverifiedTarget);

c) forward all verified messages to verifiedTarget, and discard all unverified messages:

source.LinkTo(verifiedTarget, verification, discardMessages: true);

As you can see you can easily model "if" forking as well as "switch" forking. Our goal is to give you a set of primitives that enable you to build the network that will solve your business problem.

Zlatko Michailov
Software Development Engineer, Parallel Computing Platform
Microsoft Corp.

If you are satisfied by this post, please mark it as "Answer". This posting is provided "AS IS" with no warranties, and confers no rights.

It is worse because you can't change or even access the discardMessages once the link has been established. If I have an empty ActionBlock that effectively sinks unhandled messages, I can at least dispose that link later without affecting the predicated block. Yes, you could do that right now as well, but I think it would be cleaner to not have the discardMessages option at all.

I understand that you can create if statements using the LinkTo predicate, I'm just opposed to how it works. It hides away very significant information about the network. That information doesn't belong on the link, it should be its own block.
It's also annoying that I have to specify [and run] the predicate twice if I want to do an if/else, one for the 'true' case and one for the 'false'. I think the API would be much clearer and composable if the predicate had its own block. After all, that is how it is done internally already. It is also how WF does it.

If I understand your first point correctly, you are suggesting removing the discardMessages option on LinkTo, because it could be simulated by linking a dummy ActionBlock. Logically, that is true. Practically, that ActionBlock will be consuming memory to buffer up messages before being able to "vaporize" them. Then it will spin up tasks that will consume CPU cycles to get those tasks scheduled, to pop buffered messages, and to invoke the dummy callback. I'm pretty sure some people will gladly opt into using discardMessages to avoid that cost.

Re: "its also annoying that i have to specify [and run] the predicate twice if i want to do an if/else, one for the 'true' case and one for the 'false'"

Unless the source is a BroadcastBlock, the predicate need not be invoked more than once per message. See the examples I listed above. And BroadcastBlock doesn't make sense to be used as a source in an "if/else" fork because its sole purpose is to make message acceptance non-exclusive.

Re: "the api would be much clearer and composable if the predicate had its own block. after all that is how it is done internally already"

We've been discussing this. Your opinion is well taken. That doesn't mean we are considering removing the predicate on LinkTo though.

Zlatko Michailov
Software Development Engineer, Parallel Computing Platform
Microsoft Corp.
"I'm pretty sure some people will gladly opt into using discardMessages to avoid that cost"

I agree -if- you use an ActionBlock :) but you could have a SinkBlock or NullBlock or something that just throws messages away in a synchronous fashion.

"Unless the source is a BroadcastBlock, the predicate need not be invoked more than once per message"

I see now that you are correct. For some reason I had it that you'd have to do a.LinkTo(t => t.checkIfTrue) and also a.LinkTo(t => t.checkIfFalse), but that is obviously not the case.

Still, I'd really like an If block :) I think it would make networks easier to follow, and also help toolability in the future.
CPython Extensions for IronPython

A Proof of Concept Using Python.NET

Contents

- Introduction
- The C Extensions for IronPython Project
- Usage
- Implementation Details
- The Test File
- Limitations
- CHANGELOG

Note

This article describes one way to use CPython extensions from IronPython / .NET by embedding CPython itself via Python.NET. Since this article was written a much more practical approach has been implemented as an Open Source project by Resolver Systems. This project fakes Python25.dll and has a large subset of the Python C API implemented in C#. A large percentage (around a thousand tests at the time of writing) of the Numpy test suite passes when used from IronPython, and other whole extension libraries are usable.

Introduction

This module came out of The C Extensions for IronPython Project started by Resolver Systems. It allows you to access CPython modules from IronPython. It works by embedding a CPython interpreter, using an assembly provided by the Python.NET Project (where most of the hard work was done!). Despite some serious limitations it works! The example code included uses matplotlib [1] with numpy and Tkinter from IronPython.

It provides two different usage patterns:

- An import hook allowing you to import CPython binary modules with normal import statements
- An Import function allowing you to import modules (pure Python or binary modules) into the embedded CPython interpreter

See the usage section for examples.

Download

The download includes Python.Runtime.dll from Python.NET, for a UCS2 build of Python. This is the right assembly for Python 2.4 on Windows. For Linux you probably want a UCS4 build. The Python.NET distribution contains runtimes for Python 2.4 and 2.5 in both UCS2 and UCS4. This technique does depend on having the appropriate version of Python installed (although you could also ship the relevant Python dll).
The source for this module lives in the FePy Subversion Repository.

This code only works with IronPython 2 because of an annoying bug when passing Arrays as arguments in IronPython 1. This can be worked around, but targeting IronPython 2 may be better as we could look at using the DLR to help with making PyObjects behave like IronPython data types (see below).

Bug reports, contributions and suggestions welcomed.

The C Extensions for IronPython Project

Resolver Systems recently announced a new project to get CPython extensions working seamlessly with IronPython. This is needed by customers of Resolver, but is being run as an Open Source project.

- Announcement: Project to get some CPython C extensions running under IronPython
- C Extensions for IronPython Mailing List

The approach in this module is to embed the CPython interpreter in an assembly and access CPython extensions through the hosted interpreter. This module is far from the final solution and it's not even clear that it is the best approach to take. At Resolver we have also been experimenting with directly loading CPython assemblies and replacing the CPython API with function pointers that use delegates to call back into managed code. This would also give us binary compatibility and avoid some of the problems caused by hosting a real CPython interpreter. So far we have Python binary extensions loading, calling into our code and then crashing! This is a great first step because it means the basic approach works, and now all we need to do is implement the whole Python C API in managed code...
Usage

The distribution includes the following files:

- embedding.py: The main module
- cext.py: The import hook
- test.py: A simple test of the basic functionality - run this with IronPython 2
- echo.py: A test module that is imported into CPython by test.py
- Python.Runtime.dll - Suitable for a UCS2 build of Python 2.4

Note

Due to some bug in the Orcas Beta, the import hook doesn't work if you have any of the Visual Studio 2008 betas (Orcas) installed.

There are two ways of using this project. The first way is with the Import function from the embedding module.

The Import Function

test.py shows this approach in action. The final part of this module generates a plot using matplotlib:

pylab = Import('pylab')
pylab.plot([1, 1, 1.5, 2.5, 3, 3, 3.1])
pylab.show()

This generates a simple plot. As you can see, the pylab module imported from CPython behaves in (apparently) the same way as it does when running directly in CPython.

Caution!

The actual code in test.py imports the sys module from the hosted interpreter, and attempts to add to sys.path. Unfortunately this has no effect! (Although the path seems to be set correctly for my computer anyway.) The reason it doesn't work highlights something to be aware of if you want to use this module.

sys.path.append('c:\\Python24\\Lib\\')

In the code above, the sys module is successfully imported as a proxy object. When you access sys.path, this proxy object recognises that you are accessing a Python list (on the CPython side) and copies it across to IronPython for you. This means that the append is executed on the copy, not on the original. d'oh

The solution would be either to not copy the list and to proxy access to it as well, or to provide functions on the CPython side allowing you to manipulate sys.path.

The Import Hook

Installing the import hook allows you to import Python binary extensions using normal import statements! Python binary extensions are .pyd files on Windows and .so files on other platforms.
To install the import hook, execute the following code:

cext.install()

You can then do things like import cElementTree. The goal is that eventually this will be built into FePy and enabled by an option, so that you can import CPython modules without having to take any special steps.

Under the hood, the import hook uses the embedding.Import function. When you use the import hook to import binary modules you may want to do things like setting the import path on the hosted interpreter. Obviously normal import statements (like import sys) will import the IronPython version. To access the builtin modules of CPython you will still need to use embedding.Import.

Implementation Details

The Python Runtime Assembly

This provides a very thin wrapper around the CPython embedding API. You have to acquire and release the Global Interpreter Lock (GIL) around every operation, and it works with PyObjects, which are managed wrappers around CPython types.

The code below imports the PythonEngine and initializes it. It also defines two decorators:

- GIL acquires and releases the GIL, and wraps all operations with CPython objects.
- handle_exception handles exceptions that occur in CPython and reraises them as IronPython exceptions.

clr.AddReference('Python.Runtime')
from Python.Runtime import PythonEngine

engine = PythonEngine()
engine.Initialize()

def lock():
    h = engine.AcquireLock()
    def unlock():
        engine.ReleaseLock(h)
    return unlock

def GIL(function):
    def f(*args, **keywargs):
        unlock = lock()
        try:
            ret = function(*args, **keywargs)
        finally:
            unlock()
        return ret
    return f

def handle_exception(function):
    def f(*args, **kw):
        try:
            return function(*args, **kw)
        except PythonException, e:
            exc = PyExcConvert(PyObject(e.PyType))
            value = ConvertToIpy(PyObject(e.PyValue))
            raise exc(value)
    return f

Importing modules is done with the Import function. This could be the only function you need to import from the embedding module to access CPython extensions.
def Import(name):
    module = engine.ImportModule(name)
    if module is None:
        raise ImportError("Importing module named %s failed" % name)
    return PythonObject(module)

Proxied Objects and Data Structures

When you import a module it returns a proxied object. By default all objects you access are proxied objects unless they are a fundamental datatype - which will be converted from a PyObject into the equivalent IronPython type. Proxied objects let you get and set attributes on them, plus call them. The proxied class is PythonObject:

class PythonObject(object):

    def __init__(self, real):
        self._real_ = real

    @GIL
    @handle_exception
    def __getattribute__(self, name):
        real = object.__getattribute__(self, '_real_')
        if name == '_real_':
            return real
        return ConvertToIpy(real.GetAttr(name))

    @GIL
    @handle_exception
    def __setattr__(self, name, value):
        if name == '_real_':
            object.__setattr__(self, '_real_', value)
            return
        self._real_.SetAttr(name, ConvertToPy(value))

    @GIL
    @handle_exception
    def __call__(self, *args, **keywargs):
        if not keywargs:
            return ConvertToIpy(self._real_.Invoke(ConvertToPy(args)))
        return ConvertToIpy(self._real_.Invoke(ConvertToPy(args), StringDictToPy(keywargs)))

The ConvertToPy and ConvertToIpy functions do the converting between CPython and IronPython types. They can convert integers, long integers, strings, booleans and None, plus lists, tuples and dictionaries. This means that if you call a Python function (or set an attribute) with a reference to an IronPython data structure it will be copied into CPython types. Any return values will be copied from CPython types to IronPython types. See the limitations section below for some of the consequences of this.

It does mean that once you have imported a module you can call functions / methods and instantiate classes from inside CPython. You can also fetch and set attributes and all the right things should happen.

The Test File

test.py tests the basic functionality of the embedding.py module. It also serves as a usage example.
When run with IronPython 2, it imports echo.py into CPython and passes data back and forth to check that it survives the journey. These are done with asserts, so if any fail then it will bomb out with an error. If it works you should see:

    Received into CPython: 1 Type: <type 'int'>
    Received into CPython: 3.2000000000000002 Type: <type 'float'>
    Received into CPython: u'Hello from IronPython' Type: <type 'unicode'>
    Received into CPython: 10000000000L Type: <type 'long'>
    Received into CPython: None Type: <type 'NoneType'>
    Received into CPython: True Type: <type 'bool'>
    Received into CPython: False Type: <type 'bool'>
    Received into CPython: [] Type: <type 'list'>
    Received into CPython: () Type: <type 'tuple'>
    Received into CPython: {} Type: <type 'dict'>
    Received into CPython: ({u'something': 3.2000000000000002}, u'hello', u'from', u'ironpython', [1, 2, 3]) Type: <type 'tuple'>
    setting something
    something else
    setting something
    something else
    fetching something
    fetching something
    Received into CPython: 123 Type: <type 'int'>
    Caught an exception from CPython correctly.

test.py also tests other features of this module, like catching exceptions from CPython (etc).

Limitations

As you have probably already gathered, this is an early implementation and suffers from some serious limitations. Some basic types like complex numbers aren't converted for you. Additionally, data structures are copied in and out, which is expensive - and you lose changes that CPython makes to mutable data structures you pass in. The technique for copying data structures (in and out) doesn't take into account recursive data structures - and so will probably never terminate if it encounters them. For more efficient work you can leave data as PyObject structures rather than copy. One avenue of investigation would be to see if we can make these PyObject structures (managed wrappers around the CPython data) behave like their equivalent IronPython objects.
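The non-termination on recursive structures could be handled with a memo table, the same trick copy.deepcopy uses. Below is a plain-Python sketch of that idea (the function name and structure are illustrative, not the embedding module's actual converter):

```python
def convert(obj, memo=None):
    """Recursively copy lists/tuples/dicts, surviving cycles.

    The memo maps id(original) -> copy. A structure that contains
    itself is registered before its children are converted, so the
    cycle is reproduced instead of looping forever as a naive
    recursive copy would.
    """
    if memo is None:
        memo = {}
    if id(obj) in memo:
        return memo[id(obj)]
    if isinstance(obj, list):
        copy = []
        memo[id(obj)] = copy          # register before recursing
        copy.extend(convert(item, memo) for item in obj)
        return copy
    if isinstance(obj, dict):
        copy = {}
        memo[id(obj)] = copy
        for k, v in obj.items():
            copy[convert(k, memo)] = convert(v, memo)
        return copy
    if isinstance(obj, tuple):
        # tuples are immutable, so the copy is registered after building
        copy = tuple(convert(item, memo) for item in obj)
        memo[id(obj)] = copy
        return copy
    return obj  # fundamental types pass through unchanged

recursive = [1, 2]
recursive.append(recursive)   # a list that contains itself
dup = convert(recursive)
```

The copy preserves the cycle: the third element of the duplicate is the duplicate itself, not the original.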
This would allow some source code to operate unmodified. CPython objects (other than the basic datatypes) are proxied. This means that they have the wrong type and magic methods probably won't work. Oh, and strings from .NET come in as unicode on the CPython side.

Despite these problems, as you can see from the matplotlib demo it works fine. With some CPython helper modules you may be able to solve quite difficult problems with this as a starting point. Feel free to experiment with and extend this code. If you do fix any of the problems then please send the code back to me.

CHANGELOG

2007-11-08 Version 0.1.4
- Added missing return statement!

2007-11-03 Version 0.1.3
- Added 'cext.py' import hook (by sanixyn).
- Improved object proxying.
- Failed imports now raise an ImportError.
- Support for passing CPython objects back into CPython.
- Support for function calls that don't take keyword arguments!
- Improved the way CPython booleans are accessed.

2007-11-01 Version 0.1.2
- Added CPython exception handling (by sanixyn).

2007-10-25 Version 0.1.1
- Added support for keyword arguments.

2007-10-24 Version 0.1
- Initial release.

Last edited Fri Nov 27 18:32:35 2009.
http://www.voidspace.org.uk/ironpython/cpython_extensions.shtml
Programming with Python Reference

Analyzing Patient Data
- an array.
- Array indices start at 0, not 1.
- Use low:high to specify a slice.

Repeating Actions with Loops
- Use for variable in collection to process the elements of a collection one at a time.
- The body of a for loop must be indented.
- Use len(thing) to determine the length of something that contains other values.

Storing Multiple Values in Lists
- [value1, value2, value3, ...] creates a list.
- Lists are indexed and sliced in the same way as strings and arrays.
- Lists are mutable (i.e., their values can be changed in place).
- Strings are immutable (i.e., the characters in them cannot be changed).

Analyzing Data from Multiple Files
- Use glob.glob(pattern) to create a list of files whose names match a pattern.
- Use * in a pattern to match zero or more characters, and ? to match any single character.

Making Choices
- Use if condition to start a conditional statement, elif condition to provide additional tests, and else to provide a default.
- The bodies of the branches of conditional statements must be indented.
- Use == to test for equality.
- X and Y is only true if both X and Y are true.
- X or Y is true if either X or Y, or both, are true.
- Zero, the empty string, and the empty list are considered false; all other numbers, strings, and lists are considered true.
- Nest loops to operate on multi-dimensional data.
- Put code whose parameters change frequently in a function, then call it with different parameter values to customize its behavior.

Creating Functions
- Define a function using def name(...params...).
- The body of a function must be indented.
- Call a function using name(...values...).
- Numbers are stored as integers or floating-point numbers.
- Integer division produces the whole part of the answer (not the fractional part).
- Each time a function is called, a new stack frame is created on the call stack to hold its parameters and local variables.
- Python looks for variables in the current stack frame before looking for them at the top level.

Errors and Exceptions
- Tracebacks can look intimidating, but they give us a lot of useful information about what went wrong in our program, including where the error occurred and what type of error it was.
- An error having to do with the "grammar" or syntax of the program is called a SyntaxError. If the issue has to do with how the code is indented, then it will be called an IndentationError.
- A NameError will occur if you use a variable that has not been defined (either because you meant to use quotes around a string, you forgot to define the variable, or you just made a typo).
- Containers like lists and strings will generate errors if you try to access items in them that do not exist. This type of error is called an IndexError.
- Trying to read a file that does not exist will give you an IOError. Trying to read a file that is open for writing, or writing to a file that is open for reading, will also give you an IOError.

Defensive Programming
- Know what code is supposed to do before trying to debug it.
- Make it fail every time.
- Make it fail fast.
- Change one thing at a time, and for a reason.
- Keep track of what you've done.
- Be humble.

Command-Line Programs
- The sys library connects a Python program to the system it is running on.
- The list sys.argv contains the command-line arguments that a program was run with.
- Avoid silent failures.
- The "file" sys.stdin connects to a program's standard input.
- The "file" sys.stdout connects to a program's standard output.

group of instructions (i.e., lines of code) that transform some input arguments to some output.
- function call - A use of a function in another piece of code.
- while loop - A loop that keeps executing as long as some condition is true. See also: for loop.
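The sys.argv bullets above can be made concrete with a tiny script (the filename and behaviour are made up for illustration):

```python
# word_count.py - print the number of words in each file named on the
# command line. sys.argv[0] is the script name itself, so the real
# arguments start at index 1.
import sys

def count_words(text):
    """Count whitespace-separated words in a string."""
    return len(text.split())

def main(argv):
    if len(argv) < 2:
        # Avoid a silent failure: report the problem and signal an
        # error through the exit status.
        print("usage: word_count.py FILE [FILE ...]")
        return 1
    for filename in argv[1:]:
        with open(filename) as f:
            print(filename, count_words(f.read()))
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))
```

Running it with no arguments prints the usage line and exits with status 1; with filenames, it prints one count per file.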
https://cac-staff.github.io/summer-school-2016-Python/reference.html
Is there a way in C/C++ to put code explicitly at a specified memory address (code to be executed)? Does that have to be shellcode? (Stack or heap doesn't matter) (In Windows)

A few tutorials for specific memory handling in C and C++, both Windows and nix based.
edit: fixed links

Thanks alot! I tried using it in its most simple way (from the 2nd link):

    #include <dos.h>
    void pokeb (unsigned int segment, unsigned int offset, char value);
    int main()
    {
        pokeb (0x760F, 0x00AE, 125);
        return 0;
    }

But when linking this error occurs (Borland C++ 5.5):

    Error: Unresolved external 'pokeb(unsigned int, unsigned int, char)' referenced from C:\WINDOWS\DESKTOP\UNTITLED.OBJ

Do I have to link it with another file or something?

Try: ; }

That's as simple as it can get, honestly. If the problem still occurs, not sure what to say, as I am a Visual .NET user, being unfamiliar with Borland.

Yes, but here you rely on the address of x, I want to write explicitly to an address I chose. But thx anyway

So have x == your memory address and define a variable each time?

A yes /me slaps self :D lol, thanks pooh sun tzu

EDIT: I created 2 proggies:

Program 1:

    #include <iostream.h>
    int i = 2;
    int main()
    {
        cout << &i;
        int stuff;
        cin >> stuff;
        return 0;
    }

Now, what I want to do is modify the value of i in program 2.

Program 2:

    #include <iostream.h>
    int main()
    {
        int *pointer;
        pointer = (int*)0x0041C178;
        cout << "Value1: " << *pointer << endl; /* <- this is supposed to print the value of i from program 1 */
        *pointer = 3;
        cout << "Value2: " << *pointer;
        return 0;
    }

So I run program 1 which shows the address of where i is stored. The variable i stays in memory, right?
I use cin to pause the program (yes I know, stupid way but I don't know any other (yet)). Thus in program 2 I create a pointer to that address. But when it displays the value of that location it is not the value of i from program 1, which is still running. In this case it is 0 (with me). If I declare the integer in program 1 inside main() it's some number like 570577 (something like that). Anyone know what is wrong with this?

EDIT2: I know this is supposed to be impossible as the kernel (should) manage memory, and thus that memory address would normally be protected as it is already in use. But these programs run without any errors and it's clearly not some form of shared memory (Win98).

Just out of curiosity, why do you want to write to a specific address? and why not use assembly? ;) and I only mean that it's faster and more direct. But how about this: I'm not at home right now, I will look through my commands book for C++ and see if there is something I didn't think of. You could always write it in assembly; most C++ compilers will let you write a function in ASM (assembly).

Well I'm pretty noobyish at both assembly and C++, but one of the reasons I could think of was that the person wanted to execute code affecting the memory at h04 or h20, and that would be for the purposes of a .com virus replicating itself. But that would be indigenous to Intel family x86 CPUs only; don't know about AMD. And that's not to say what he's thinking of, because I'm sure that there are other good reasons for accessing memory directly with a program in C++, like cleaning it, the same thing that regedit does... :confused:
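The behaviour the poster is seeing follows from virtual memory: each process gets its own address space, so the same numeric address refers to different storage in different processes. A quick Python illustration of per-process isolation (the variable names are arbitrary):

```python
import multiprocessing

value = 2  # each process gets its own copy of this module-level variable

def child():
    """Runs in a separate process with its own address space."""
    global value
    value = 3  # changes only the child's copy
    print("child sees", value)

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    # The parent's copy is untouched: the write happened in another
    # address space entirely.
    print("parent still sees", value)
```

The parent prints 2 after the child exits, which is exactly why program 2 cannot reach program 1's variable through a raw pointer.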
http://www.antionline.com/printthread.php?t=252971&pp=10&page=1
beng2beng
Posted February 8, 2010

I am trying to get the pixel color of a specific location of a program that's running but is not currently the active window. I have used Au3Info to get the mouse position and pixel color, so I know what I should be getting. It seems that the only way that I can get the color I expect is to WinActivate the window. Is this true? Can you only use PixelGetColor on active windows? (Without activating the program I want, it seems to retrieve the current active window's color at that x/y position.)

    $hotwin = WinGetHandle("Zzzzz", "")
    If $hotwin <> "" Then
        WinSetState("Zzzzz", "", @SW_MAXIMIZE) ; I maximize to be sure my coords are always the same
        ;WinActivate("Zzzzz", "")
        $hotcolor = PixelGetColor(1205, 497, $hotwin)
        MsgBox(0, Hex($hotcolor), Hex($hotwin), 0)
    EndIf

Thanks.
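For what it's worth, the arithmetic behind any get-pixel call is just an offset into a framebuffer. Here is a Python sketch over a raw 32-bit BGRA buffer; the BGRA layout and the 2x2 "screenshot" are assumptions for illustration only (AutoIt hides all of this behind PixelGetColor):

```python
def pixel_color(buf, width, x, y):
    """Return the 0xRRGGBB color of pixel (x, y) in a BGRA byte buffer."""
    off = (y * width + x) * 4            # 4 bytes per pixel, row-major
    b, g, r = buf[off], buf[off + 1], buf[off + 2]
    return (r << 16) | (g << 8) | b

# A hypothetical 2x2 screenshot: white, red / black, green
shot = bytes([255, 255, 255, 255,   0, 0, 255, 255,
              0, 0, 0, 255,         0, 255, 0, 255])
```

Looking up (1, 0) in this buffer yields pure red, 0xFF0000, the same kind of packed RGB value PixelGetColor returns.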
https://www.autoitscript.com/forum/topic/109664-pixelgetcolor/
On my Windows 7 x64 machine, when you lock the pointer with the browser window fullscreen on my secondary monitor, the mozMovement{X,Y} values in the generated MouseEvent events are stuck at a constant value. It doesn't seem to matter where I position my secondary monitor WRT my primary (i.e. above, below, left, right, etc). It's just that pointer lock requests on the secondary monitor seem to have fairly constant mozMovement{X,Y} values. I'm not sure if this bug exists on non-Windows platforms. Testing on Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/15.0 Firefox/15.0a1

STR:
1. Configure your system to have a secondary monitor.
2. Open in Firefox Nightly.
3. With the Firefox window on your primary display, press the 'p' key, or click the "Fullscreen with pointer lock" button.
4. Move the mouse. Observe the "Pointer: delta=(*, *) screen=(*, *) client=(*, *)" text being updated as MouseEvents are dispatched.
5. Press ESC to exit fullscreen, move the Firefox window to your secondary monitor, press 'p', or click the "Fullscreen with pointer lock" button.
6. Observe that the "Pointer: ..." text doesn't update the same as when pointer locked on the primary monitor. The "delta" values (the mozMovement{X,Y} values) are fairly constant.

On my Linux box, when I lock the pointer on my secondary display (following the STR in comment 0) the mouse jumps to appear as visible on my primary, but remains hidden on my secondary display, and I still get incoherent mozMovement{X,Y} values. Also note that the mozMovement{X,Y} values are coherent when in fullscreen mode on the secondary monitor; it's only once the pointer is locked that they start reporting bogus values.
Note aBounds has negative values in some fields in the secondary monitor case, whereas in the primary monitor case they're all >=0. Humph: Why do you need to pass the bounds into this function? And why is innerHeight used in the calculation but innerWidth isn't? It looks like SynthesizeNativeMouseEvent takes coords relative to the window/widget's top-left corner, rather than in system pixel coords, at least on windows it looks like it does: Should nsIntPoint nsEventStateManager::GetMouseCoords() just be returning (innerWidth/2, innerHeight/2)? Created attachment 633389 [details] [diff] [review] Patch 1 v1: Coords relative to widget, not screen. The problem in brief is that we're using the widget's bounds in screen coords to calculate the mouse position and movement deltas, but the mouse/dom events and widget synthetic dispatch always assume the coords are in widget coords (i.e. offset from the widget origin, not the primary screen's origin). Specifically, nsEventStateManager::GetMouseCoords(bounds) is being used to calulate the center of the window containing the mouse event, but GetMouseCoords() is passed screenBounds (bounds in screen coords) and using that to seed its calculations causing the calculated mouse position to be in screen coords. We're dispatching synthetic mouse events to the screen coords, but the synthetic dispatch expects widget coords, causing the problem. Changes in this patch: * Change the window center calculation to return an offset relative to the widget's origin. This fixes the base problem. * Remove the "if (aEvent->refPoint == aEvent->lastRefPoint) aEvent->refPoint = sLastRefPoint;" case in nsEventStateManager::GenerateMouseEnterExit. In my testing this branch was only taken when aEvent->refPoint == sLastRefPoint, so it has no effect. * Save the mouse position in widget coords before going into pointer lock in nsEventStateManager::mPreLockPoint (previously it was saved in screen coords). 
* Synthesize mouse move to nsEventStateManager::mPreLockPoint when exiting, so the mouse returns to the same position upon pointer lock exit (previously mPreLockPoint was write only, and sLastScreenPoint was used for this purpose, except sLastScreenPoint is in screen coords, so it would behave incorrectly on secondary monitors).
* Set nsEventStateManager::sLastRefPoint before entering/exiting pointer lock so that the synthetic mouse move fired when entering/exiting doesn't report movement as ((mouse position before lock) - (center of window)) as it did previously. (If that's not clear, play with and you should see what I mean)

Created attachment 633393 [details] [diff] [review] Patch 2 v1: Add comments and a little cleanup.

While figuring out how pointer lock worked I added comments, removed some unused code, and simplified nsDOMUIEvent::GetMovementPoint(). This should make it easier to understand the pointer lock code in future.

Comment on attachment 633389 [details] [diff] [review] Patch 1 v1: Coords relative to widget, not screen.

>- aEvent->widget->SynthesizeNativeMouseMove(aEvent->lastRefPoint);
>+ nsIntPoint center = GetWidgetCenter(aEvent->widget);
>+ aEvent->lastRefPoint = center;
>+ // If this mouse move doesn't finish at the center of the widget,
>+ // dispatch a synthetic mouse move to return the mouse back to the
>+ // center.
>+ if (aEvent->refPoint != center) {
>+ aEvent->widget->SynthesizeNativeMouseMove(center);
> }

Why do we want to center in widget, and not in the web page? If browser chrome takes lots of space, the center can be over chrome.

Created attachment 634254 [details] [diff] [review] Patch 1 v2: Coords relative to widget, not screen.

Created attachment 634256 [details] [diff] [review] Patch 1 v3: Coords relative to widget, not screen.

Ooops, some of the changes from Patch 2 made it into this one. The new patch finds the center of the inner content area, rather than the center of the outer window/widget.
Created attachment 634259 [details] [diff] [review] Patch 2 v2: Add comments and a little cleanup.

Rebased on new patch 1.

Comment on attachment 634256 [details] [diff] [review] Patch 1 v3: Coords relative to widget, not screen.

>+// Returns the center point of the window's inner content area.
>+// This is in widget coordinates, i.e. relative to the widget's top
>+// left corner, not in screen coordinates.
>+static nsIntPoint
>+GetWindowInnerRectCenter(nsPIDOMWindow* aWindow,
>+ nsIWidget* aWidget,
>+ nsPresContext* aContext)
>+{
>+ NS_ENSURE_TRUE(aWindow != nsnull, nsIntPoint(0,0));
>+ NS_ENSURE_TRUE(aWidget != nsnull, nsIntPoint(0,0));
>+ NS_ENSURE_TRUE(aContext != nsnull, nsIntPoint(0,0));

NS_ENSURE_TRUE(aWindow && aWidget && aContext, nsIntPoint(0,0));

.) Not needed for b2g.

(In reply to Olli Pettay [:smaug] from comment #11)
> .)

Ah yes, my mistake. You're correct, I was converting to dev pixels, not CSS pixels. I'll fix that. The change I'm trying to make here is to remove the unnecessary conversion from widget offset to screen coords. Let me update the patch again...

Created attachment 634661 [details] [diff] [review] Patch 2 v3: Add comments and a little cleanup.

Since it's tagged as a games P2, we are not going to track this. Can we get this on Aurora (fx15)?

Comment on attachment 634256 [details] [diff] [review] Patch 1 v3: Coords relative to widget, not screen.

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Mouse/pointer lock (bug 633602)
User impact if declined: Pointer lock reports incorrect mouse movement values when the mouse is locked on a secondary monitor. Games etc won't work. FF15 is our big bang HTML5 Games Are Awesome Release, so it would be good to get this in FF15.
Testing completed (on m-c, etc.): Local testing, and it's been on m-c for weeks.
Risk to taking this patch (and alternatives if risky): Pretty low risk.
String or UUID changes made by this patch: None.
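The coordinate bug boils down to mixing two origins. A Python sketch of the underlying arithmetic (names are made up for illustration; the real fix is the C++ above):

```python
def inner_rect_center(inner_x, inner_y, inner_width, inner_height):
    """Center of the content area, relative to the widget's top-left
    corner - NOT relative to the primary screen's origin."""
    return (inner_x + inner_width // 2, inner_y + inner_height // 2)

def screen_to_widget(screen_pt, widget_screen_origin):
    """Convert a screen-coordinate point into widget coordinates."""
    return (screen_pt[0] - widget_screen_origin[0],
            screen_pt[1] - widget_screen_origin[1])

# A secondary monitor placed left of the primary has a negative screen
# origin - exactly where screen and widget coordinates diverge, and why
# the bug only showed up on secondary displays.
origin = (-1920, 0)                                   # widget's screen position
center_widget = inner_rect_center(0, 0, 1920, 1080)   # widget coords
center_screen = screen_to_widget((-960, 540), origin) # same point, converted
```

On the primary monitor the origin is (0, 0), so the two coordinate systems coincide and the confusion goes unnoticed; with a negative origin they differ by thousands of pixels.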
Comment on attachment 634256 [details] [diff] [review] Patch 1 v3: Coords relative to widget, not screen. [Triage Comment] Low risk, and in support of a big bang :). Approved for Aurora 15. Target Milestone is for m-c. This is fixed in mozilla 15, so why shouldn't the target milestone be 15? Or is target milestone supposed to be for the release in which it first was pushed to m-c? (In reply to Chris Pearce (:cpearce) from comment #23) > Or is target milestone supposed to be for the release in which it first was > pushed to m-c? Yes.
https://bugzilla.mozilla.org/show_bug.cgi?id=756936
I don't get many of these "I have code that doesn't work" requests. But I do see them once in a great while. It might be something like the following two-part explanation with a following question.

I have this code

    from base64 import b64encode

    def some_func(message):
        msg = b64encode(message)

    msg = some_func(b'hello world')
    print(f"padding = {msg.count(b'=')}")

I'm getting this error.

    Traceback (most recent call last):
      File "/Users/slott/miniconda3/envs/CaseStudy39/lib/python3.9/site-packages/IPython/core/interactiveshell.py", line 3437, in run_code
        exec(code_obj, self.user_global_ns, self.user_ns)
      File "<ipython-input-18-d54347890e97>", line 7, in <module>
        print(f"padding = {msg.count(b'=')}")
    AttributeError: 'NoneType' object has no attribute 'count'

What can I do?

<rant>As a personal note, I'm extremely grumpy when I get this in the form of a screen picture. I cannot work with images of code. It's really important to present code as text. Not a picture of text.</rant>

There are two kinds of answers to this question.

1. It's obvious (to me) what's wrong. While I can say what the problem is likely to be, that doesn't help the questioner.
2. The questioner needs a strategy for getting to working software.

This, of course, can piss off some people because they insist all questions have simple answers and I'm just being unhelpful by giving them a bunch of steps they're supposed to follow.

I'm going to stick to answers of the second kind. I don't provide answers of the first kind.

The Two General Answers

There are two general answers of the second kind.

- Use the debugger.
- Add print().

I'm told the debugger can be fun to use. I'm not skilled in using it, so I don't generally recommend it. I find it difficult to uncover state change using the debugger. It's great for exploring a data structure.

Adding print() is something I find easier and more useful.

Add print()

Here's what folks can do to uncover a problem. This is all we ever need to do.
There are no weird other cases or complex situations where this doesn't work. The procedure for adding print() works like this:

- Find the line with the error. In the example, it's the final print().
- Look at all the variables. In this case, there's only one, msg.
- Put a print() in front to show the values of all the variables: print(f"{msg=}"). This will reveal that msg is None after the assignment statement.

Now we have to look at the function, some_func(), that creates the value for msg. We'll start from the end of this function and work forward, following the above three-step procedure faithfully. And recursively. Eventually, we'll uncover the problem. It may not be blazingly obvious, but we will, without fail, find a missing state change or an unexpected state change. (In this case, it's missing.)

This is the only answer I can ever give to the "why doesn't my code work?" question: Add print().
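Applied to the opening example, the procedure plays out like this. The added print() shows msg has a value inside the function, which points straight at the missing return (the fix shown here is one reading of the bug, not the blog's own code):

```python
from base64 import b64encode

def some_func(message):
    msg = b64encode(message)
    print(f"{msg=}")   # step 3: msg has a value here...
    return msg         # ...so the bug was the missing return statement

msg = some_func(b'hello world')
print(f"padding = {msg.count(b'=')}")
```

With the return in place, msg is no longer None at the call site and the final print() works.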
https://slott-softwarearchitect.blogspot.com/2021/08/i-have-code-that-didnt-work-what-now.html
In this tutorial, we are going to explore how to convert a Python list of objects to a CSV file.

Convert Python List Of Objects to CSV:

As part of this example, I am going to create a list of Item objects and export/write them into a CSV file using the csv package.

Recommended: How to read data from CSV file in Python

Convert List Of Objects to CSV:
- Create an Item class.
- Prepare a list of Item objects.
- Create the items.csv file.
- Write the items data into items.csv.

list_to_csv.py

    import csv

    class Items(object):
        def __init__(self, id, name, category):
            self.__id = id
            self.__name = name
            self.__category = category
            self.__index = -1

        @property
        def id(self):
            return self.__id

        @property
        def name(self):
            return self.__name

        @property
        def category(self):
            return self.__category

    if __name__ == '__main__':
        filename = 'items.csv'
        items = [Items(100, 'iPhone 10', 'Mobiles'), Items(200, 'Lenovo', 'Laptops'),
                 Items(300, 'Java in Action', 'Books'), Items(400, 'Python', 'Books')]
        try:
            # newline='' keeps the csv module from writing blank lines on Windows
            with open(filename, 'w', newline='') as f:
                writer = csv.writer(f)
                for item in items:
                    writer.writerow([item.id, item.name, item.category])
        except BaseException as e:
            print('BaseException:', filename)
        else:
            print('Data has been loaded successfully !')

Code Walkthrough:
- Created an Items class with id, name and category properties, and defined its constructor.
- Created a list of Item objects.
- Opened the items.csv file with write permissions.
- Iterated over the items list and wrote each item to the file.
- Created a CSV writer; the writer.writerow() function writes a complete row with the given values to the file.

Output:
Data has been loaded successfully !

items.csv
100,iPhone 10,Mobiles
200,Lenovo,Laptops
300,Java in Action,Books
400,Python,Books

References:

Happy Learning 🙂
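Reading the rows back is the mirror image of the export. A small round-trip sketch using an in-memory buffer, so it needs no files on disk (the data is the same as the tutorial's; io.StringIO stands in for the file):

```python
import csv
import io

rows = [[100, 'iPhone 10', 'Mobiles'], [200, 'Lenovo', 'Laptops']]

# Write to an in-memory text buffer instead of items.csv
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows(rows)

# Rewind and read the same CSV text back; csv.reader yields
# lists of strings, one per row
buf.seek(0)
back = [row for row in csv.reader(buf)]
```

Note that everything comes back as strings, so numeric columns like the id would need an explicit int() conversion after reading.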
https://www.onlinetutorialspoint.com/python/how-to-convert-python-list-of-objects-to-csv-file-example.html
All these keywords are part of the main method of any C# program. The Main method, which is the entry point for all C# programs, states what a class does when it is executed.

    using System;

    class Demo {
       static void Main(string[] args) {
          Console.WriteLine("My first program in C#!");
       }
    }

public − This is the access specifier; it states that the method can be accessed publicly.

static − Here, an object is not required to access static members.

void − This states that the method doesn't return any value.

Main − As stated above, it is the entry point of a C# program, i.e. this method is the method that executes first.
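For comparison, Python has no mandatory Main method; the conventional entry-point idiom below plays the same role (this is a general Python convention, not part of the C# material):

```python
import sys

def main(args):
    """Entry point by convention: runs only when the file is executed
    directly, not when it is imported as a module."""
    print("My first program!")
    return 0   # exit status - the analogue of returning an int from Main

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

As with C#'s Main, the runtime (here, the interpreter plus the __name__ check) decides when this function is invoked, and the returned value becomes the process exit status.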
https://www.tutorialspoint.com/What-is-the-difference-between-public-static-and-void-keywords-in-Chash
3 Jul 03:27 2004
Re: Minimum MetaClass override to get/set custom properties
<jastrachan@...>
2004-07-03 01:27:35 GMT

On 29 Jun 2004, at 19:14, Andy Depue wrote:
> These are actually Java classes that I will be passing into groovy. Will
> Groovy use these two methods for property access if they are in a normal
> (non-groovy) Java class?

I'm fairly sure that if you write a Java class like this...

    public class MyClass {
        public Object get(String name) {
            // return something
        }
        public void set(String name, Object value) {
            ...
        }
    }

then it'll just work. AFAIK this works today and has done for some time - I tried to leave this hook inside the MetaClass in case anyone had any DynaBean type object or provided any Map-like random access methods. We need to enhance the MetaClass to make it easier to map which random access methods should be used to look up dynamic properties in objects.

James
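The hook James describes has a close Python analogue: unknown property access can be routed through __getattr__/__setattr__. The sketch below is the comparable Python idiom, not Groovy's actual mechanism:

```python
class DynaBean:
    """Store 'dynamic properties' in an internal dict.

    __getattr__ is only called when normal attribute lookup fails,
    so it acts like the get(name) hook; __setattr__ intercepts every
    assignment, like set(name, value).
    """
    def __init__(self):
        # Bypass our own __setattr__ while creating the backing dict
        object.__setattr__(self, '_props', {})

    def __getattr__(self, name):
        try:
            return self._props[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._props[name] = value

b = DynaBean()
b.colour = 'red'   # goes through __setattr__ into the props dict
```

Any attribute name becomes a valid property, with both reads and writes funnelled through the two hooks.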
http://permalink.gmane.org/gmane.comp.lang.groovy.user/1740
On Sun, Jan 10, 2010 at 09:55:50PM -0500, Vitor Sessak wrote:
> Måns Rullgård wrote:
>> Vitor Sessak <vitor1001 at gmail.com> writes:
>>> +#if !HAVE_LOG2F
>>> +static av_always_inline av_const float log2f(float x)
>>> +{
>>> +    return log(x) * 1.44269504088896340736;
>>> +}
>>> +#endif /* HAVE_LOG2F */
>>
>> Calling log2() should be better. Most of the bad libs have a full set
>> of double-precision functions. It's the single-precision ones, which
>> were added in C99, that are often missing.
>
> Fine, even though NetBSD lacks it (that's why we have a configure check
> for it, so no problem there).
>
>>> +...
>
> -Vitor
>  configure           |  4 ++++
>  libavutil/internal.h | 14 ++++++++++++++
>  2 files changed, 18 insertions(+)
>  f25bb41b563db2bd35239ca070cb257b2967566f  fix_mingw3.diff

parts i maintain: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-January/086673.html
Ah yes, I see what you mean:

    class Test():
        x = 1
        print (x)                          # Prints 1
        print([x+i for i in range(1,3)])   # NameError (x)

Anyway, I apologise for posting to Python-Dev on what was a known issue, and which turned out to be more me asking for help with development with Python, rather than development of Python. (My original use case was a scripting language that could contain embedded Python code.) Thanks to Nick for his original answer.

Rob Cliffe

On Mon, Jun 11, 2018 at 3:10 PM Rob Cliffe via Python-Dev <python-dev@python.org> wrote:

Skip,

Example:

    def Test():
        x = 1
        print([x+i for i in range(1,3)])          # Prints [2,3]
        exec('print([x+i for i in range(1,3)])')  # Raises NameError (x)
    Test()

I (at least at first) found the difference in behaviour surprising.

Change 'def' to 'class' and run it again. You'll be even more surprised.
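The surprise comes from list comprehensions having their own scope: the comprehension can see an enclosing function's locals directly, but code compiled by exec() (and a class body) gets a fresh top-level scope where those names are not visible. A demonstration, including the usual workaround of passing the names in explicitly:

```python
def demo():
    x = 1
    # Inside exec(), the comprehension's implicit function looks x up
    # as a global of the exec'd code, not in demo()'s locals - hence
    # the NameError from the mailing-list example.
    try:
        exec('print([x + i for i in range(1, 3)])')
        failed = False
    except NameError:
        failed = True
    # Supplying the name in an explicit globals dict makes it visible:
    namespace = {'x': x}
    exec('result = [x + i for i in range(1, 3)]', namespace)
    return failed, namespace['result']
```

Calling demo() shows both halves: the bare exec raises NameError, while the version with an explicit namespace computes [2, 3].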
https://mail.python.org/archives/list/python-dev@python.org/message/3UVBOHPWR4IKOUMUKMWKDCD3GAZAOAAF/attachment/2/attachment.htm
#include <math.h> These functions round x to the nearest integer value that is not larger in magnitude than x. These functions return the rounded integer value, in floating format. If x is integral, infinite, or NaN, x itself is returned. For an explanation of the terms used in this section, see attributes(7). The integral value returned by these functions may be too large to store in an integer type (int, long, etc.). To avoid an overflow, which will produce undefined results, an application should perform a range check on the returned value before assigning it to an integer type.
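Python exposes the same operation as math.trunc; the contrast with floor for negative values makes the "not larger in magnitude" wording concrete:

```python
import math

# trunc() drops the fractional part (rounds toward zero);
# floor() always rounds toward negative infinity.
values = [2.7, -2.7]
truncated = [math.trunc(v) for v in values]   # magnitude never grows
floored = [math.floor(v) for v in values]     # -2.7 goes down to -3
```

For positive inputs the two agree; for negative inputs trunc stops at -2 while floor continues to -3, which is exactly the distinction the man page's wording draws.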
http://manpages.courier-mta.org/htmlman3/trunc.3.html
Python Inheritance

Inheritance allows us to define a class that inherits all the methods and properties from another class.

Parent class is the class being inherited from, also called base class.

Child class is the class that inherits from another class, also called derived class.

Create a Parent Class

Any class can be a parent class, so the syntax is the same as creating any other class:

Example: Create a class named Person, with firstname and lastname properties, and a printname method:

    class Person:
        def __init__(self, fname, lname):
            self.firstname = fname
            self.lastname = lname

        def printname(self):
            print(self.firstname, self.lastname)

    #Use the Person class to create an object, and then execute the printname method:
    x = Person("John", "Doe")
    x.printname()

Create a Child Class

To create a class that inherits the functionality from another class, send the parent class as a parameter when creating the child class:

Example: Create a class named Student, which will inherit the properties and methods from the Person class:

    class Student(Person):
        pass

Note: Use the pass keyword when you do not want to add any other properties or methods to the class.

Now the Student class has the same properties and methods as the Person class.

Example: Use the Student class to create an object, and then execute the printname method:

    x = Student("Mike", "Olsen")
    x.printname()

Add the __init__() Function

So far we have created a child class that inherits the properties and methods from its parent.

We want to add the __init__() function to the child class (instead of the pass keyword).

Note: The __init__() function is called automatically every time the class is being used to create a new object.

Example: Add the __init__() function to the Student class:

    class Student(Person):
        def __init__(self, fname, lname):
            #add properties etc.

When you add the __init__() function, the child class will no longer inherit the parent's __init__() function.

Note: The child's __init__() function overrides the inheritance of the parent's __init__() function.
To keep the inheritance of the parent's __init__() function, add a call to the parent's __init__() function:

Example:

    class Student(Person):
        def __init__(self, fname, lname):
            Person.__init__(self, fname, lname)

Now we have successfully added the __init__() function, and kept the inheritance of the parent class, and we are ready to add functionality in the __init__() function.

Use the super() Function

Python also has a super() function that will make the child class inherit all the methods and properties from its parent:

Example:

    class Student(Person):
        def __init__(self, fname, lname):
            super().__init__(fname, lname)

By using the super() function, you do not have to use the name of the parent element; it will automatically inherit the methods and properties from its parent.

Add Properties

Example: Add a property called graduationyear to the Student class:

    class Student(Person):
        def __init__(self, fname, lname):
            super().__init__(fname, lname)
            self.graduationyear = 2019

In the example below, the year 2019 should be a variable, and passed into the Student class when creating student objects. To do so, add another parameter in the __init__() function:

Example: Add a year parameter, and pass the correct year when creating objects:

    class Student(Person):
        def __init__(self, fname, lname, year):
            super().__init__(fname, lname)
            self.graduationyear = year

    x = Student("Mike", "Olsen", 2019)

Add Methods

Example: Add a method called welcome to the Student class:

    class Student(Person):
        def __init__(self, fname, lname, year):
            super().__init__(fname, lname)
            self.graduationyear = year

        def welcome(self):
            print("Welcome", self.firstname, self.lastname, "to the class of", self.graduationyear)

If you add a method in the child class with the same name as a function in the parent class, the inheritance of the parent method will be overridden.
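Overriding and super() can be combined: an overridden method may still call the parent's version. A sketch building on the classes above (it returns strings instead of printing, purely so the result is easy to inspect):

```python
class Person:
    def __init__(self, fname, lname):
        self.firstname = fname
        self.lastname = lname

    def printname(self):
        return f"{self.firstname} {self.lastname}"

class Student(Person):
    def __init__(self, fname, lname, year):
        super().__init__(fname, lname)
        self.graduationyear = year

    def printname(self):
        # Override the parent method, but reuse its formatting via super()
        return f"{super().printname()} (class of {self.graduationyear})"

x = Student("Mike", "Olsen", 2019)
```

The child's printname wins when called on a Student, yet the parent's logic still runs inside it, so the base formatting is defined in exactly one place.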
https://www.w3schools.com/python/python_inheritance.asp
import "cuelang.org/go/internal/str"

Package str provides string manipulation utilities.

Contains reports whether x contains s.

FoldDup reports a pair of strings from the list that are equal according to strings.EqualFold. It returns "", "" if there are no such strings.

HasFilePathPrefix reports whether the filesystem path s begins with the elements in prefix.

HasPath reports whether the slash-separated path s begins with the elements in prefix.

SplitQuotedFields splits s into a list of fields, allowing single or double quotes around elements. There is no unescaping or other processing within quoted fields.

StringList flattens its arguments into a single []string. Each argument in args must have type string or []string.

Package str imports 6 packages. Updated 2020-09-18.
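The package is internal to CUE, so its Go source is not shown here. As an illustration of the documented FoldDup contract (return a pair of strings that are equal under case folding, or "", "" if there are none), a behavioural sketch in Python might look like the following; the function name and logic are assumptions, not the actual implementation:

```python
def fold_dup(names):
    """Return the first pair of strings that compare equal ignoring case,
    or ('', '') if there is no such pair. This mirrors the documented
    behaviour of str.FoldDup; str.casefold() plays the role of Go's
    strings.EqualFold here (an approximation, not an exact equivalent)."""
    seen = {}
    for name in names:
        key = name.casefold()
        if key in seen:
            return seen[key], name
        seen[key] = name
    return '', ''

fold_dup(["math/rand", "MATH/RAND", "strings"])  # ('math/rand', 'MATH/RAND')
fold_dup(["a", "b"])                             # ('', '')
```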
https://godoc.org/cuelang.org/go/internal/str
While trying to post comments on a particular issue (for some others it works) I get a "Communication Breakdown" error. The only way to post a comment on such issues is to post it while resolving the issue, and then I need to reopen the issue afterwards. Sometimes it helps if I temporarily remove the issue from the sprint, but not always. I've already checked, but it did not help. I get no errors in atlassian-jira.log and no errors in the Apache error log. In the access log I get:

xx.xx.xx.xx - - [16/Jul/2015:10:49:48 +0200] "POST /rest/api/2/issue/OPERATIONS-2914/comment HTTP/1.1" 400 110
xx.xx.xx.xx - - [16/Jul/2015:10:49:49 +0200] "GET /rest/quickwatch/1.0/notification/count/unread?_=1437036593681 HTTP/1.1" 200 27
xx.xx.xx.xx - - [16/Jul/2015:10:49:49 +0200] "GET /rest/scriptrunner/1.0/message?_=1437036593862 HTTP/1.1" 204 -

Any error in the browser's console? Is there any proxy between you and JIRA? The 400 (Bad Request) should be investigated a bit more... In the meantime, get some Led Zep: Communication breakdown / It's always the same / I'm having a nervous breakdown / Drive me insane

No proxy, just Apache between me and JIRA. In Firefox:

"submitting form:" batch.js:18616:199
"JIRA.Events.NEW_CONTENT_ADDED: " batch.js:204:328
Object { type: "newContentAdded", timeStamp: 1437039580518, jQuery172012567291339557074: true, isTrigger: true, exclusive: undefined, namespace: "", namespace_re: null, result: undefined, target: HTMLDocument → OPERATIONS-2914, delegateTarget: HTMLDocument → OPERATIONS-2914, 3 more… } batch.js:204:328
"[behaviors] new content added, clearing outstanding errors..." batch.js:18640:218

In Chrome:

submitting form:
POST 400 (Bad Request)
JIRA.Events.NEW_CONTENT_ADDED: d.Event {type: "newContentAdded", timeStamp: 1437039703447, jQuery17204086012684274465: true, isTrigger: true, exclusive: undefined…}
[behaviors] new content added, clearing outstanding errors...
polling for notifications.
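One way to narrow down where the 400 comes from (JIRA itself versus the Apache layer in front of it) is to replay the failing REST call outside the browser. The sketch below uses only Python's standard library; the base URL and credentials are placeholders, while the /rest/api/2/issue/{key}/comment endpoint with a JSON {"body": ...} payload follows the JIRA REST API shape visible in the access log above:

```python
import json
from urllib import request

def build_comment_request(base_url, issue_key, body):
    """Build the URL and JSON payload for the add-comment REST call."""
    url = "%s/rest/api/2/issue/%s/comment" % (base_url.rstrip("/"), issue_key)
    payload = json.dumps({"body": body}).encode("utf-8")
    return url, payload

def post_comment(base_url, issue_key, body, auth_header):
    # Replaying the call from the command line shows whether JIRA or
    # the proxy in front of it produces the 400, and lets you see the
    # response body that the browser swallows.
    url, payload = build_comment_request(base_url, issue_key, body)
    req = request.Request(url, data=payload, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("Authorization", auth_header)  # placeholder credentials
    return request.urlopen(req)  # an HTTPError here carries the 400 body

# Example (placeholders, not a real instance):
# post_comment("https://jira.example.com", "OPERATIONS-2914",
#              "test comment", "Basic dXNlcjpwYXNz")
```

If the same request succeeds when sent directly to the JIRA port but fails through Apache, the proxy configuration is the place to look.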
https://community.atlassian.com/t5/Jira-questions/JIRA-v6-4-1-Communication-Breakdown-while-posting-comments/qaq-p/314480
Data structure for complex enumeration.

Project description

Catalog is a data structure for storing complex enumerations. It provides a clean definition pattern and several options for member lookup. Supports Python 2.7, 3.3+.

Install

pip install pycatalog

Usage

from catalog import Catalog

class Color(Catalog):
    _attrs = 'value', 'label', 'other'
    red = 1, 'Red', 'stuff'
    blue = 2, 'Blue', 'things'

# Access values as attributes
> Color.red.value
1
> Color.red.label
'Red'

# Call to look up members by attribute value
> Color('Blue', 'label')
Color.blue

# Calling without an attribute specified assumes the first attribute defined in `_attrs`
> Color(1)
Color.red

Attributes

_attrs: Defines the names of attributes of members. (default: ['value'])

_member_class: Override the class used to create members. Create a custom member class by extending CatalogMember.

Methods

_zip: Return all members as a tuple. If attrs are provided as positional arguments, only those attributes will be included, and in that order. Otherwise all attributes are included, followed by the member name.

> Color._zip()
(('red', 1, 'Red', 'stuff'), ('blue', 2, 'Blue', 'things'))
> Color._zip('value', 'label')
((1, 'Red'), (2, 'Blue'))

Changelog

1.2.0 - Add support for Python 2. (Wrong direction. I know)
1.1.1 - Add _zip method
1.0.0 - Initial build and packaging
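If the package itself is unavailable, the lookup behaviour described above can be approximated in a few lines of plain Python. This is a simplified stand-in for illustration only, not the pycatalog implementation:

```python
class MiniCatalog:
    """Toy stand-in for pycatalog's lookup: members are stored as a
    name -> dict-of-attributes mapping, and lookup() searches by an
    attribute value, defaulting to the first declared attribute."""

    def __init__(self, attrs, **members):
        self._attrs = list(attrs)
        self._members = {
            name: dict(zip(self._attrs, values))
            for name, values in members.items()
        }

    def lookup(self, value, attr=None):
        # Mirrors Color('Blue', 'label') / Color(1) from the docs above.
        attr = attr or self._attrs[0]
        for name, fields in self._members.items():
            if fields[attr] == value:
                return name
        raise KeyError(value)


color = MiniCatalog(('value', 'label', 'other'),
                    red=(1, 'Red', 'stuff'),
                    blue=(2, 'Blue', 'things'))
color.lookup(1)                # 'red'
color.lookup('Blue', 'label')  # 'blue'
```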
https://pypi.org/project/pycatalog/1.2.0/
hey all-- i'm new to c++ (as the difficulties below will undoubtedly prove) and would GREATLY appreciate any/all help i can get...

i'm trying to set up and manipulate several linked lists, each containing structs as their elements. so ->

#include <list>
using namespace std;

struct Process {
    string PID;
    int arrivalTime;
    ....
};

list <Process> newQ;
list <Process> readyQ;
list <Process> diskQ;
list <Process> runQ;
list <Process> exitQ;

list<Process>::iterator newQPtr;
list<Process>::iterator readyQPtr;
list<Process>::iterator diskQPtr;
list<Process>::iterator runQPtr;
list<Process>::iterator exitQPtr;

the code compiles fine...first I populate the newQ with Processes, and that works... but i get indistinguishable run-time errors when i try something akin to

for(newQPtr=newQ.begin(); newQPtr!=newQ.end(); newQPtr++)
{
    if(newQPtr->arrivalTime == time)
    {
        readyQ.push_back(*newQPtr);
        newQ.erase(newQPtr);
    }
}

(basically trying to take a Process struct off the newQ and place it on the readyQ)

does anyone have any idea what i'm doing wrong here? is the problem with the iterator? if so, what kind of pointer should i be using? i need to be able to iterate through the various lists, taking nodes off one list and placing them on another. thanks in advance~
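For what it's worth, the likely culprit in the snippet above is that std::list::erase invalidates the erased iterator, after which newQPtr++ is undefined behaviour; the usual C++ idiom is newQPtr = newQ.erase(newQPtr) when erasing, and ++newQPtr otherwise. The underlying pitfall, mutating a container while iterating over it, exists in most languages. A small Python sketch of one safe pattern follows (illustrative only; the names mirror the post):

```python
# Two "queues" of processes, mirroring newQ and readyQ in the post.
new_q = [{"pid": "P1", "arrival": 0}, {"pid": "P2", "arrival": 3}]
ready_q = []

def move_arrived(new_q, ready_q, time):
    """Move arrived processes to ready_q without erasing from new_q
    mid-iteration: partition into 'moved' and 'kept' lists instead."""
    moved = [p for p in new_q if p["arrival"] == time]
    kept = [p for p in new_q if p["arrival"] != time]
    ready_q.extend(moved)
    new_q[:] = kept  # in-place slice assignment so callers see the change

move_arrived(new_q, ready_q, 0)
# ready_q now holds P1; new_q holds only P2
```

Partitioning into new containers sidesteps iterator/index invalidation entirely, at the cost of a temporary copy; in C++ the erase-returns-next-iterator idiom achieves the same safety without the copy.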
http://cboard.cprogramming.com/cplusplus-programming/14140-cant-figure-structs-linked-list-printable-thread.html
Nicola Ken Barozzi wrote:
>
> I'm not able to see the project and group logos of the test site
> generated using Batik. Commenting out the DTD from the skinconf.xml file
> fixes it.
>
> Any idea about how to fix it?
>

It is all related to the fact that a context is not created anymore. Since there is a problem accessing the previous version in SVN, I am kind of stuck with this problem.

> I don't like the old one as it "pollutes" the svg with extra namespaces.
> In essence, this is what happens: the file is passed through this
> stylesheet:

> But to move this from "for" to skinconf is trivial.
>
> We could simply use the xslt.svg system and deprecate the <for:> usage,
> but I'm not sure.
>

I think that we have two issues here. One is the use of the for: and skinconf: namespaces. +1, as this is not important; it could be foo if we wish.

> What changes is the fact that now the file is enclosed in two other tags
> and the use of value-of instead of <for:>, and the for namespace is no
> more
>

I think that this mixes things up. In the old way you had the content in one file. The for: namespace would be ignored unless you used it within Forrest. Now you are mixing this data with code (XSLT).
http://mail-archives.apache.org/mod_mbox/forrest-dev/200406.mbox/%3C40C05A46.7020609@che-che.com%3E
Re: RHN Update
From: Michael Schwendt (fedora_at_wir-sind-cool.org)
Date: Wed, 16 Feb 2005 13:38:14 +0100
To: carpenter@fbtelco.com, General Red Hat Linux discussion list <redhat-list@redhat.com>
In reply to: Brad Carpenter: "RHN Update"

On Tue, 15 Feb 2005 21:41:02 -0600, Brad Carpenter wrote:

> OK, basic question here. We are running RHEL3 AS, using it for ISP functions. The last time I connected to RHN it wanted to download a caching-nameserver package, which I did, and which took over and made our secondary DNS server "fall off the map".
>
> We recovered fairly easily, albeit very nervously, and wish NOT to run into such a problem again.
>
> The question is: "Are there updates that we should ignore on a system like this?" And if so, how does one go about picking and choosing what to skip and what to keep?
>
> I see posts that say "update everything" all the time. Is this the way it is expected to work?

Yes. Precisely: get the security/bug-fix updates for all packages you have installed already. Be careful when adding new packages to your installation which don't replace existing old packages. The caching-nameserver package you installed was not present on your system before and hence overwrote existing config files with its new contents, probably just /etc/named.conf and local zone files in /var/named.

Once installed, the config files within RPM packages are protected in two ways against being overwritten during updates. Depending on how an RPM package is configured, an update either creates *.rpmsave backup files of existing config files or stores its new config files in *.rpmnew, where you can review them before copying any new default settings into your existing config files. Where you can choose a custom name for config files (e.g. zone files in /var/named), choose a good namespace that's unlikely to be used by RPM packages.
http://linux.derkeiler.com/Mailing-Lists/RedHat/2005-02/0274.html
Assertion statements are a good start to ensuring our programs are being tested, but they don't necessarily tell us what we should test. Generally, we can start by testing the smallest unit of a program. For example, in the real world, if we were testing the functionality of a door, we could test a multitude of units. The handle could be an example of a single unit that we must check to make sure a door functions, followed by the hinges and maybe even the lock.

In programming, these types of individual tests are called unit tests. Like our door handle, we can test a single unit of a program, such as a function, loop, or variable. A unit test validates a single behavior and will make sure all of the units of a program are functioning properly.

Let's say we wanted to test a single function (a single unit). To test a single function, we might create several test cases. A test case validates that a specific set of inputs produces an expected output for the unit we are trying to test. Let's examine a test case for our times_ten() function from the previous exercise:

# The unit we want to test
def times_ten(number):
    return number * 100

# A unit test function with a single test case
def test_multiply_ten_by_zero():
    assert times_ten(0) == 0, 'Expected times_ten(0) to return 0'

Great, now we have a simple test case that validates that times_ten() is behaving as expected for a valid input of 0! We can improve our testing coverage of this function by adding some more test cases with different inputs. A common approach is to create test cases for specific edge case inputs as well as reasonable ones.
Here is an example of testing two extreme inputs:

def test_multiply_ten_by_one_million():
    assert times_ten(1000000) == 10000000, 'Expected times_ten(1000000) to return 10000000'

def test_multiply_ten_by_negative_number():
    assert times_ten(-10) == -100, 'Expected times_ten(-10) to return -100'

Now we have several test cases for a wide variety of inputs: a large number, a negative number, and zero. We can create as many test cases as we see fit for a single unit, and we should try to test all the unique types of inputs our unit will work with. Now, let's create a variety of unit tests for another feature of Small World Air.

Instructions

At Small Air World, every plane seat has a monitor which displays the nearest emergency exit. This monitor relies on a function called get_nearest_exit(), which takes a row number and then returns an exit location depending on where the row is. Let's make sure our function is working properly by creating a unit test.

Create a function called test_row_1() that will host a test case. Inside the function, assert that a call of get_nearest_exit(1) should be equal to 'front', along with a message, 'The nearest exit to row 1 is in the front!'.

Create another test case function called test_row_20(). Inside the function, call get_nearest_exit(20) and assert that the return value is equal to 'middle', along with the message, 'The nearest exit to row 20 is in the middle!'

Finally, create another test case function called test_row_40(). Inside the function, call get_nearest_exit(40) and assert that the return value is equal to 'back', along with the message, 'The nearest exit to row 40 is in the back!'

At the bottom of the file, call each of the three test functions we created. What would be the expected output?

Looks like our tests caught a logic error in our function get_nearest_exit()! If the row number is larger than 30, we actually want to return 'back'. Adjust the function and fix the error so all of our tests pass (we should see no output).
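A sketch of what the finished exercise could look like is below. The row thresholds for 'front' and 'middle' are assumptions for illustration; the lesson only pins down that rows above 30 must map to 'back':

```python
def get_nearest_exit(row_number):
    # Assumed thresholds for illustration; the exercise only specifies
    # that row numbers larger than 30 should return 'back'.
    if row_number <= 10:
        return 'front'
    elif row_number <= 30:
        return 'middle'
    else:
        return 'back'

def test_row_1():
    assert get_nearest_exit(1) == 'front', \
        'The nearest exit to row 1 is in the front!'

def test_row_20():
    assert get_nearest_exit(20) == 'middle', \
        'The nearest exit to row 20 is in the middle!'

def test_row_40():
    assert get_nearest_exit(40) == 'back', \
        'The nearest exit to row 40 is in the back!'

test_row_1()
test_row_20()
test_row_40()  # no output: all three assertions pass
```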
https://www.codecademy.com/courses/learn-intermediate-python-3/lessons/int-python-unit-testing/exercises/unit-testing
UnNews talk:Heartless bastards make HILARIOUS Steve Irwin-based UnNews story
From Uncyclopedia, the content-free encyclopedia

The original "crikey for steve irwin" was done in good taste. It never says it was his own fault that he died. And there was even a condolences part in the UnNews article. This is a parody site for poop's sake...--Kingkitty 21:03, 4 September 2006 (UTC)

- Actually the original Steve Irwin news item was this one, which for some reason got moved. I also hoped that it would be seen as quite tasteful, given that I am a fan of the late Mr Irwin.
- ACTUALLY... there were at least four of them, and a buttload of complaints about them. This article was seen as a fair middle-ground, given that it indeed acknowledges that the incident is funny and worthy of an article, but that it might be a little early to make the jokes. I agree with you that not all of the articles were tasteless, but many were, and all were seen as offensive by many of the Aussie readers we have here. If anyone needs any articles restored, I'll be more than happy to do so, and put them into your namespace until such time as it's deemed "the right time." (For a hint of the kind of comments we were getting, see Talk:Steve Irwin.) Thank you for putting up with all of this, and please "throw another stingray on the barbie" understand that when we take an article down for being in bad taste and replace it with an article about how much of a heartless bastard you are for writing it, it's not at all personal. No really, it isn't: regardless of what the title and the content say, we're just trying to appease the widely offended masses by noting how cruel making light of a famous person's death really can be (despite the fact it's both commonplace and prevalent on the internet today).
- Sorry if my article about how offensive your relatively non-offensive article was offended you.
Some people are just touchy, you know?--<< >> 22:23, 4 September 2006 (UTC)

- Eh, actually, can you point out exactly where the other one was offensive? All I see are random IPs spouting crap that they would do whatever the content of the article was. And since when exactly is offensiveness a valid reason for deletion on Uncyclopedia? I'm sure cancer porn, fisher price and niggers all cause a lot more offense. And how do you and whoever else get to decide that "THIS will be the only Steve Irwin article for a while"? Sorry, I'm sure everyone knows by now how much I don't like these things being discussed where not everyone can see them. Care to post the discussion here? And seeing as I wasn't part of this discussion, here's my opinion: I'm very much against this being the only Steve Irwin news story.
- I'm sure if you looked a little closer, it's been mentioned by the main editors that they were all fans of Irwin, so anything written is not done with the intent to ridicule or make fun of him.
- Oh, and I'm pretty sure that me and David Gerard were the only admins to add to those pages, so I'm hoping those comments about the admin in this weren't about me.
• Spang • ☃ • talk • 03:08, 5 September 2006 (UTC)

- 10 Take a Chill Pill
- 20 Print "Still offended?"; INPUT $
- 30 If $ = "Yes", goto 10
- 40 End
- There's nothing sexier than people getting offended at being lampooned for being offensive: turns me right on...--<< >> 03:54, 5 September 2006 (UTC)
- Look, there's no reason at all to bring MoneySign into this. —rc (t) 03:56, 5 September 2006 (UTC)
- Who ever said I was offended? And to reply to your edit summary: as far as I'm concerned, IRC discussions are not, and will never be, valid for deciding consensus of the wiki. That's what discussion pages are for. If you want consensus, ask for it on the wiki, where everyone can see. • Spang • ☃ • talk • 04:05, 5 September 2006 (UTC)

OK, it would appear that there are two discussions going on here, which are linked and might not need to be. How about this:

Is my article offensive?

No. If the answer to this is "yes", the universe will implode through paradox.--<< >> 04:25, 5 September 2006 (UTC)

- No. Was it funny? Meh. Were you getting a bit ahead of yourself declaring that yours should be the only one? Yes. • Spang • ☃ • talk • 04:40, 5 September 2006 (UTC)
- Eh, I could live with it not being funny. Humor was its main purpose, but as long as it served its secondary purpose to hold off the Irwin jokes, if only for a little while, I'm happy. And it did. Again, I didn't move or delete anything, nor did I bring up the idea of getting rid of the other articles, though to be honest I thought that if people were as horribly offended as they appeared to be, it was a very good idea, and I would hope I would have thought so if it was someone else's article that was telling others to hold off on the jokes. As that wasn't the case, of course I can't say what I would have done in that instance.
- I still think that using callous indifference to mock callous indifference is the perfect way to mock callous indifference, which was present at Uncyclopedia, if not in the main article, then certainly in the one that had a picture of Irwin making love to a stingray, to name the first one that comes to mind. Also, just so we're clear, I didn't have ANY admins other than myself in mind when I wrote the bit about the evil admin. It's merely a stereotype, and I was merely casting us the way we were looked upon by the complainers.--<< >> 04:56, 5 September 2006 (UTC)
- I think the two Irwin news articles that are there now probably should do the trick in that case. The "main" (for want of a better word) is still protected, and I'm fine with that as a cooling off thing.
Obviously the articles still need to be funny and not just stupid. I have no problem deleting the crap one-liners, but articles of sufficient quality shouldn't be removed, as long as they are funny. Though articles like yours generally do stop others on the subject, like the similar Pluto one. And I know you didn't delete or move anything, but the talk page of this article seems the only place to discuss this, and someone did. Anyway, I can't think of anything too wrong with things as they are now. Your triple-header attack has weakened my argumentative powers :). Must sleep to regenerate them for the next person to fall foul of Spang's Unwritten Rules Of Uncyclopedia™ ;) • Spang • ☃ • talk • 05:32, 5 September 2006 (UTC)

- Yes, to the extent that Uncyclopedia is offensive. Rev. Zim (Talk) Get saved! 13:36, 5 September 2006 (UTC)

Are the other Steve Irwin articles offensive?

- Unsure. AM IRC-ers were pretty unanimous that the answer is "yes", PM IRC-ers have been 50-50, but many more people are piping up about not wanting to "censor" the wiki, though I think holding off until maybe the country of Australia, which regarded Irwin as a national hero, has had a LITTLE time to mourn would be nice, in the very least. I am relatively neutral in the whole affair, but prefer to err on the side of taste, when given the option.--<< >> 04:25, 5 September 2006 (UTC)
- Possibly, but what does that have to do with it? I thought we loved offending people.—Sir Mandaliet ♠ CUN PS VFH GN (talk) 04:29, 5 September 2006 (UTC)
- The plan wasn't to remove it forever, just temporarily, as things seemed to be snowballing as people were trying to one-up each other.--<< >> 04:34, 5 September 2006 (UTC)
- Irrelevant. It's an UnNews article. News is about stuff that's new. Holding off on the joke will make it not new, and therefore not worth doing as a news article, when you decide the time is right. (22.3 years, for those not in the know).
• Spang • ☃ • talk • 04:40, 5 September 2006 (UTC)

- 22.3 years? Am I missing something witty?--<< >> 04:46, 5 September 2006 (UTC)
- Yes, 22.3 years. (Near the bottom of the plot section) (watch it though, it may even be relevant to this discussion). • Spang • ☃ • talk • 04:52, 5 September 2006 (UTC)

Is the IRC off-limits for making snap decisions?

HELL NO. If we need to make a call on whether something is cruel or otherwise menacing, we need the ability to make a quick move on the matter. This article was placed here to be a hint to others that maybe it would be best to leave the "HAHAHA! That Croc Hunter moron is finally dead" stuff to lower-quality humor sites. Feel free to vote on this here, or take it to the Dump instead. This is the issue here where I have the strongest opinion.--<< >> 04:25, 5 September 2006 (UTC)

- No, but I'm not sure we all agree on what requires a snap judgement. Did you actually read the "main" article on it that wasn't yours before Codeine moved it? "HAHAHA! That moron..." it was not. • Spang • ☃ • talk • 04:40, 5 September 2006 (UTC)
- I think a 24-hour lockup would be appropriate--Shandon 05:33, 5 September 2006 (UTC)
- How about if a snap judgement on IRC is deemed necessary, the snap judgement maker(s) still set up a discussion about it somewhere on the wiki, about whether to uphold the snap decision or reverse it? Then everyone wins, probably. • Spang • ☃ • talk • 00:41, 6 September 2006 (UTC)

Is it any good now?

I think this article and the other one cover it. I've added the requisite joke about thumbs up aaaaaarses - David Gerard 09:53, 5 September 2006 (UTC)
http://uncyclopedia.wikia.com/wiki/UnNews_talk:Heartless_bastards_make_HILARIOUS_Steve_Irwin-based_UnNews_story
Pierre Ossman wrote:
> On Wed, 29 Oct 2008 16:26:07 +0200
> Adrian Hunter <ext-adrian.hunter@nokia.com> wrote:
>
>> Pierre Ossman wrote:
>>> You also need to adjust the sg list when you change the block count.
>>> There was code there that did that previously, but it got removed in
>>> 2.6.27-rc1.
>>
>> That is not necessary. It is an optimisation. In general, optimising an
>> error path serves no purpose.
>
> Actually, it is a requirement that some drivers rely on. There is also
> a WARN_ON(), or possibly a BUG_ON(), in the request handling path. I
> think it only triggers with MMC_DEBUG though...

OK

>>> Some concerns here:
>>>
>>> 1. "brq.data.blocks > 1" doesn't need to be optimised into its own
>>> variable. It just obscures things.
>>
>> But you have to assume that no driver changes the 'blocks' variable, e.g.
>> counts it down.
>
> That would be a no-no. I guess this is not properly documented, but it
> is strongly implied that the request structure should be read-only
> except for fields that contain result data. ;)
>
>> It is not an optimisation, it is just to improve
>> reliability and readability. What does it obscure?
>
> if-clauses look like they're different, but they're not.

Well, the if clauses are different anyway, but no matter.

>>> 3. You should check all errors, not just data.error and ETIMEDOUT.
>>
>> No. Data timeout is a special case. The other errors are system errors.
>> If there is a command error or stop error (which is also a command error)
>> it means either there is a bug in the kernel or the controller or card
>> has failed to follow the specification. Under those circumstances
>
> Controllers do not follow the specs. Welcome to reality. :)
>
> (Sad fact: I don't think there is a single controller out there that
> follows the specs to 100%.)
>
> Anyway, for this specific case, a controller might not be able to
> differentiate between command and data timeouts. Or it might report
> "data error" for anything unexpected (I seem to recall one of the
> currently supported controllers behaves like this).
>
> Generally it's about being as flexible as we can be. This is a world of
> crappy hardware, so strict spec adherence just leads to a lot of hurt.
> And I have the scars to prove it. :)

I agree with your philosophy, although I was merely letting it go back to
the way it was, which I thought was safer than introducing new behaviour
into an undefined situation.

>>> 4. You should first report the successfully transferred blocks as ok.
>>
>> That is another optimisation of the error path, i.e. not necessary. It
>> is simpler to just start processing the request again, which the patch
>> does.
>
> I suppose.
>
>>> You might also want to print something so that it is
>>> visible that the driver retried the transfer.
>>
>> There are already two error messages per sector (one from this function
>> and one from __blk_end_request()), so another message is too much.
>
> I didn't mean an error for the bad sector, but something like
> "encountered an error during multi-block transfer. retrying...". This
> could save a lot of time and headache during debugging in the future.
>
>> @@ -362,12 +380,19 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
>>  #endif
>>  	}
>>  
>> -	if (brq.cmd.error || brq.data.error || brq.stop.error)
>> +		goto cmd_err;
>> +
>> +	if (brq.data.error == -ETIMEDOUT && rq_data_dir(req) == READ) {
>> +		spin_lock_irq(&md->lock);
>> +		ret = __blk_end_request(req, -EIO, brq.data.blksz);
>> +		spin_unlock_irq(&md->lock);
>> +		continue;
>> +	}
>> +
>
> This could use a comment explaining that if we've reached this point,
> we know that it was a single sector that failed.
>
>> +	if (brq.cmd.error)
>>  		goto cmd_err;
>>  
>> -	/*
>> -	 * A block was successfully transferred.
>> -	 */
>>  	spin_lock_irq(&md->lock);
>>  	ret = __blk_end_request(req, 0, brq.data.bytes_xfered);
>>  	spin_unlock_irq(&md->lock);
>
> Shouldn't this comment stay now? :)

OK

From ceb711896b3c0cbc1c01b8f0c3733cedf3d88843 Mon Sep 17 00:00:00 2001
From: Adrian Hunter <ext-adrian.hunter@nokia.com>
Date: Thu, 16 Oct 2008 13:13:08 +0300
Subject: [PATCH] mmc_block: ensure all sectors that do not have errors are read

If a card encounters an ECC error while reading a sector it will
timeout.
Instead of reporting the entire I/O request as having
an error, redo the I/O one sector at a time so that all readable
sectors are provided to the upper layers.

Signed-off-by: Adrian Hunter <ext-adrian.hunter@nokia.com>
---
 drivers/mmc/card/block.c |   68 ++++++++++++++++++++++++++++++++++-----------
 1 files changed, 51 insertions(+), 17 deletions(-)

diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 5b46ec9..040b57e 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -231,7 +231,7 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	struct mmc_blk_data *md = mq->data;
 	struct mmc_card *card = md->queue.card;
 	struct mmc_blk_request brq;
-	int ret = 1;
+	int ret = 1, disable_multi = 0;
 
 	mmc_claim_host(card->host);
 
@@ -253,6 +253,14 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+	 */
+	if (disable_multi && brq.data.blocks > 1)
+		brq.data.blocks = 1;
+
 	if (brq.data.blocks > 1) {
 		/*
 		 * SPI multiblock writes terminate using a special
 		 * token, not a STOP_TRANSMISSION request.
@@ -281,6 +289,16 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	brq.data.sg = mq->sg;
 	brq.data.sg_len = mmc_queue_map_sg(mq);
 
+	/*
+	 * Some drivers expect the sg list to be the same size as the
+	 * request, which it won't be if we have fallen back to do
+	 * one sector at a time.
+	 */
+	if (disable_multi) {
+		brq.data.sg->length = 512;
+		brq.data.sg_len = 1;
+	}
+
 	mmc_queue_bounce_pre(mq);
 
 	mmc_wait_for_req(card->host, &brq.mrq);
@@ -292,8 +310,17 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	 * until later as we need to wait for the card to leave
 	 * programming mode even when things go wrong.
 	 */
-	if (brq.cmd.error || brq.data.error || brq.stop.error)
+	if (brq.cmd.error || brq.data.error || brq.stop.error) {
+		if (brq.data.blocks > 1 && rq_data_dir(req) == READ) {
+			/* Redo read one sector at a time */
+			printk(KERN_WARNING "%s: error, retrying using "
+			       "single block read\n",
+			       req->rq_disk->disk_name);
+			disable_multi = 1;
+			continue;
+		}
 		status = get_card_status(card, req);
+	}
 
 	if (brq.cmd.error) {
 		printk(KERN_ERR "%s: error %d sending read/write "
@@ -350,8 +377,20 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 #endif
 	}
 
-	if (brq.cmd.error || brq.data.error || brq.stop.error)
+	if (brq.cmd.error || brq.stop.error || brq.data.error) {
+		if (rq_data_dir(req) == READ) {
+			/*
+			 * After an error, we redo I/O one sector at a
+			 * time, so we only reach here after trying to
+			 * read a single sector.
+			 */
+			spin_lock_irq(&md->lock);
+			ret = __blk_end_request(req, -EIO, brq.data.blksz);
+			spin_unlock_irq(&md->lock);
+			continue;
+		}
 		goto cmd_err;
+	}
 
 	/*
 	 * A block was successfully transferred.
@@ -373,25 +412,20 @@ static int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
 	 * If the card is not SD, we can still ok written sectors
 	 * as reported by the controller (which might be less than
 	 * the real number of written sectors, but never more).
-	 *
-	 * For reads we just fail the entire chunk as that should
-	 * be safe in all cases.
 	 */
-	if (rq_data_dir(req) != READ) {
-		if (mmc_card_sd(card)) {
-			u32 blocks;
+	if (mmc_card_sd(card)) {
+		u32 blocks;
 
-			blocks = mmc_sd_num_wr_blocks(card);
-			if (blocks != (u32)-1) {
-				spin_lock_irq(&md->lock);
-				ret = __blk_end_request(req, 0, blocks << 9);
-				spin_unlock_irq(&md->lock);
-			}
-		} else {
+		blocks = mmc_sd_num_wr_blocks(card);
+		if (blocks != (u32)-1) {
 			spin_lock_irq(&md->lock);
-			ret = __blk_end_request(req, 0, brq.data.bytes_xfered);
+			ret = __blk_end_request(req, 0, blocks << 9);
 			spin_unlock_irq(&md->lock);
 		}
+	} else {
+		spin_lock_irq(&md->lock);
+		ret = __blk_end_request(req, 0, brq.data.bytes_xfered);
+		spin_unlock_irq(&md->lock);
 	}
 
 	mmc_release_host(card->host);
-- 
1.5.4.3
http://lkml.org/lkml/2008/12/5/101
This small C# program will back up the SQL database you specify and then upload it to an FTP server. It will also delete backups that are older than a specified number of days, except those taken on a specified day of the week (e.g. keep Sunday), so that you are left with daily and weekly backups: a daily backup for x days (e.g. 14) and a weekly backup that you can keep indefinitely. The backups are named DatabaseName_full_YYYYMMDD.bak. The program can be altered easily to change these parameters.

The problem is how to back up an SQL database and then send it via FTP to a remote server, to avoid having to take backups off-site using a CD, backup drive, etc. There are tools to buy and there are scripts that use shell FTP, but I couldn't find a .NET answer to the problem. I did find a T-SQL script, and this was the basis for a re-write in C# using the System.Net namespace.

All the code required is in the *.zip file. You need to specify the FTP details:

const string ftpServerURI = "ftpserver.com"; // FTP server
const string ftpUserID = "username"; // FTP username
const string ftpPassword = "password"; // FTP password
const string strCon = "Data Source=ServerInstance;Initial Catalog=master;Persist Security Info=True;Integrated Security=True;MultipleActiveResultSets=True;"; // Change ServerInstance to the name of the SQL Server you are using
const string drive = "D"; // The local drive to save the backups to
const string LogFile = "D:\\Backup\\Logs\\SQLBackup.log"; // The location on the local drive of the log files
const int DaysToKeep = 31; // Number of days to keep the daily backups for
const DayOfWeek DayOfWeekToKeep = DayOfWeek.Sunday; // Specify which daily backup to keep indefinitely

If you are unsure of how this works, then cut and paste the SQL into Management Studio so you can see exactly what the query will return.
SqlCommand comSQL = new SqlCommand(
    "select name from sysdatabases " +
    "where name not in ('tempdb','model','Northwind','AdventureWorks','master') " +
    "order by name ASC",
    new SqlConnection(strCon)); // specify here which databases you do not want to back up

Please see this article on MSDN. The new .NET classes for FTP are, I find, poorly documented, and much debugging was required to make use of the examples from MSDN. Even if you do not require a backup program, I hope you can make some use of the FTP functions I have written.

private static bool FTPDeleteFile(Uri serverUri, NetworkCredential Cred)

Despite much searching, I could not find an example of checking whether a file exists before trying to delete it, so this function attempts the delete regardless. If the file does not exist, the error is trapped gracefully; if some other error occurs, it is entered in the log file.

private static bool FTPMakeDir(Uri serverUri, NetworkCredential Cred)

You cannot upload to a directory that does not already exist on the FTP server, so this function will create it. It recursively works through the subdirectories of the URI and creates each one in turn. If a subdirectory already exists, this is trapped gracefully; if some other error occurs, it is entered in the log file.

private static bool FTPUploadFile(String serverPath, String serverFile, FileInfo LocalFile, NetworkCredential Cred)

This will upload the file to the FTP server. It uses FTPMakeDir to make the directory if it does not already exist.

This code is currently being used to back up around 10 databases in a SQL Server 2005 instance on a 1and1 server which has a local FTP backup server. The database backups range from a few KB to 200 MB.
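The retention rule described above (keep daily backups for DaysToKeep days, but keep the DayOfWeekToKeep backup indefinitely) is the part that is easiest to get wrong. The following sketch re-expresses just the naming and retention decisions in Python so they can be checked in isolation; the function names are mine, not identifiers from the C# source.

```python
from datetime import date

DAYS_TO_KEEP = 31        # delete daily backups older than this...
DAY_OF_WEEK_TO_KEEP = 6  # ...unless taken on this weekday (Python: Monday=0, Sunday=6)

def backup_name(db_name: str, d: date) -> str:
    # Matches the DatabaseName_full_YYYYMMDD.bak convention
    return f"{db_name}_full_{d:%Y%m%d}.bak"

def should_delete(backup_date: date, today: date) -> bool:
    age = (today - backup_date).days
    if age <= DAYS_TO_KEEP:
        return False  # still inside the daily-backup window
    # Older than the window: delete, unless it is the weekly keeper
    return backup_date.weekday() != DAY_OF_WEEK_TO_KEEP
```

With these rules, an old Sunday backup survives indefinitely while old weekday backups are pruned, which is exactly the daily-plus-weekly scheme the article describes.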
http://www.codeproject.com/Articles/29891/Backup-Microsoft-SQL-Database-and-Upload-to-FTP?fid=1527390&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None&fr=11
I'm new to C++ and here's a simple assignment. I'm not sure how to pull this off... ANY help will be appreciated. Here's the brief for the assignment:

You have to implement a function called approachPi that takes no parameters and has the return type double. This function returns the next element of a series, i.e. on the first call the first element is returned, on the second call the second element is returned, and so forth. The specific series that your function must use is the Gregory series (difficult to write the formula here — I attached it). This means on the first call to the function e1 = 4 will be returned, on the second call e2 = 4(1 - 1/3) will be returned, and so on. This goes on into infinity.

** The main file **

#include <iostream>
#include "Functions.h"
using namespace std;

int main()
{
    for(int i=0;i<3;++i)
        cout<<"Step "<<i+1<<":"<<approachPi()<<endl;
    return 0;
}

** The Functions.h file **

#ifndef FUN
#define FUN
double approachPi();
#endif

** My Functions.C file **

#include <iostream>
#include <cmath>
#include "Functions.h"
using namespace std;

double approachPi()
{
    double num;
    double sum;
    int size;
    for(int k = 1; k < size; k++)
    {
        cout << "Please enter a number: ";
        cin >> num;
        sum = num * pow(-1,(k+1))/((2*k)-1);
        cout << sum << endl;
    }
    return sum;
}

** Makefile **

main: Functions.o Prac1t1.o
	g++ -static Prac1t1.o Functions.o -o main

Prac1t1.o : Prac1t1.C
	g++ -c Prac1t1.C

Functions.o : Functions.C
	g++ -c Functions.C
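In case it helps, here is one way to meet the "next element per call" requirement: keep the loop state in static local variables, so each call adds exactly one more Gregory term to a running sum. This is a sketch of one possible solution, not the official answer; note it reads no input, since the assignment's approachPi takes no parameters.

```cpp
// Returns the next partial sum of the Gregory series on each call:
//   e_n = 4 * (1 - 1/3 + 1/5 - ... + (-1)^(n+1)/(2n - 1))
double approachPi()
{
    static int k = 0;        // how many terms have been added so far
    static double sum = 0.0; // running sum of the series
    ++k;
    double sign = (k % 2 == 1) ? 1.0 : -1.0; // (-1)^(k+1)
    sum += sign / (2.0 * k - 1.0);
    return 4.0 * sum;
}
```

Dropped into Functions.C (replacing the loop that asks for input), the three calls made by the question's main() print 4, 2.66667 and 3.46667, slowly approaching pi.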
https://www.daniweb.com/programming/software-development/threads/373050/help-needed-with-c-assignment
Duplicate form submission - Synchronizer Token Pattern

ravi janap, Ranch Hand, Joined: Nov 04, 2000, Posts: 389, posted Jul 15, 2005 17:01:00

Hi, I have provided here the simple steps needed to implement the synchronizer token pattern, for the benefit of the readers.

Avoid duplicate form submission using the Synchronizer Token Pattern in a Struts-based application: how to prevent a duplicate form submission on a Struts JSP page. The duplicate form submission can occur for any of the following reasons:

(a) When a user clicks more than once on a submit button before the response is sent back
(b) When a client clicks on the Back button in the browser or simply refreshes the page
(c) When the user accesses the web page by returning to a previously bookmarked page

A typical Struts-based implementation for a JSP page consists of the following components:

(a) DisplayAction, executed before the JSP page "MyPage" is displayed
(b) The JSP page "MyPage", containing form fields and a submit button
(c) SubmitAction, executed as a result of clicking submit on the MyPage JSP page

Follow these simple steps:

Step 1: DisplayAction

Call saveToken(request): this puts the token in the session. It also gets put in the request as a hidden form field by the Struts <html:form> tag if the token is found in the session. Pseudo-code:

public class DisplayAction extends Action {
    protected ActionForward protectedExecute(
            ActionMapping mapping,
            ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        ActionForward forward = mapping.findForward(ForwardName.SUCCESS);
        MyForm myForm = (MyForm) form;
        saveToken(request);
        return forward;
    }
}

Step 2: MyPage pseudo-code

<html>
<form name="myForm" method="post" action="submitAction">
<input type="text" name="myField" size="10">
<input type="button" value="Submit">
</form>
</html>

Do a right click with your mouse in the browser, view the page source, and search for the hidden fields.
<form name="myForm" method="post" action="submitAction">
<input type="hidden" name="org.apache.struts.taglib.html.TOKEN" value="41a390f7a1077ae74371625475760a7a">
<input type="text" name="myField" size="10">
<input type="button" value="Submit">
</form>

(You don't have to write the hidden-field line of code yourself; it gets put there automatically as a result of the call to the saveToken method in DisplayAction.)

Step 3: SubmitAction pseudo-code

public class SubmitAction extends Action {
    protected ActionForward protectedExecute(
            ActionMapping mapping,
            ActionForm form,
            HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        ActionForward forward = mapping.findForward(ForwardName.SUCCESS);
        if (isTokenValid(request)) {
            resetToken(request);
            //code to submit data
        } else {
            //retrieve the results of previous submittal from the session
            saveToken(request);
        }
        return forward;
    }
}

Thanks -- Ravi
[ July 15, 2005: Message edited by: Ravindra Janapareddy ]
SCJP, SCJD, SCWCD, SCBCD, SCEA

Lucian Ochian, Greenhorn, Joined: Nov 23, 2004, Posts: 7, posted Dec 05, 2005 07:38:00

Ravi, your pseudo-code is almost perfect, but there is a little bug there. In step 3, you have to remove the else statement where you save the token again. If you keep it, this can happen:

1. user hits the load action, which saves the token
2. user hits the save action and the token is reset
3. user hits the back button in the browser
4. user submits again, hits the save action, the token is not valid and it's set again (because of that else)
5. user hits the back button in the browser for the second time
6. user submits the form again, hits the save action, and guess what? the token is valid and the data is submitted again.

Regards, Lucian
PS: Anyway, your example was very helpful for me

steph thomas, Greenhorn, Joined: Mar 05, 2009, Posts: 1, posted Mar 05, 2009 08:28:14

I have implemented this pattern (synchronizer token) as above and it works well on my JSP. When I hit the back button it doesn't give me any errors and the user can continue filling out the form. The issue that I am now encountering is that I have added enctype="multipart/form-data" to my <html:form> on the JSP for the <html:file> attachments to work. When I now test the back button, I am getting an error: webpage expired.
I believe that it's due to the enctype, and the form is being submitted differently than before. Is there any way around this? At the end of the day, I just need to have the back button disabled and still be able to upload files.

O. Ziggy, Ranch Hand, Joined: Oct 02, 2005, Posts: 430, posted Oct 22, 2010 03:16:36

Hi, on this line:

if (isTokenValid(request)) {
    resetToken(request);
    //code to submit data
} else {
    //retrieve the results of previous submittal from the session
    saveToken(request);
}

How will this work if the design is such that the same action class acts as both the PreAction and the PostAction class? I have a page which does have a PreAction action, and when the user submits the form the PostAction class is called, but the user is returned back to the same page, meaning that they are allowed to make further changes to the same form and thus call the PostAction class without going through the PreAction class.

Ta

Shailesh Vaishampayan, Greenhorn, Joined: Sep 19, 2010, Posts: 2, posted Oct 23, 2010 04:16:13

Hi all, first of all thank you for letting me know about this pattern.

O. Ziggy: I think the flow would be as follows:

1) User comes through the pre-action class to the JSP page.
2) Now the token has been saved and should be visible on the JSP page.
3) User fills in the form and submits it.
4) The post-action class is called and, as a valid token is found, flow enters the if part.
5) The token is reset, but the user has entered some invalid data, or there is some more info that is needed, so he should be brought to the form again to allow him to make changes. In that case we will not reset the token. Now he is brought through the post-action class instead of the pre-action class, as Ziggy is suggesting.
6) So when the user comes to the post-action class for the second time, the token is not reset and is still valid, which will allow the user to submit the form again.

Also, I think there is no need to have that else part.
Instead, the code segment should look as follows:

if (isTokenValid(request)) {
    // here check if you want the user to submit the form,
    // or take him back to the form to allow him to make changes
    if (any error in filling the form or incomplete form) {
        // return to the form page without resetting the token
    }
    resetToken(request);
    //code to submit data
    return forward;
}
//retrieve the results of previous submittal from the session, or do whatever as per your need
saveToken(request);
return forward;

Moises Lejter, Greenhorn, Joined: Aug 10, 2006, Posts: 3, posted Feb 03, 2013 14:19:25

Hi! We just implemented this in an application I am working on, but I have a doubt about our implementation (which is much like yours): shouldn't the calls to isTokenValid() and resetToken() execute within a synchronized block? Otherwise, I could see two clients issue requests that would get processed such that the first client would execute isTokenValid(), get a true result, then get interrupted to have the second client's thread execute. In such a scenario, wouldn't they both process the request "in parallel"? Or alternatively, call isTokenValid(request, true), which would have this call automatically reset the token, if it was valid, before it returns...

Moises
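The saveToken()/isTokenValid()/resetToken() methods discussed in this thread come from Struts' Action base class, but the pattern itself is framework-independent. Below is a minimal plain-Java sketch of the same idea (class and method names are mine, not Struts APIs), with the check and the reset done in one atomic step, which addresses the concern about two clients racing through isTokenValid():

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Framework-independent sketch of the synchronizer token pattern.
// The "session" is a plain map here; in Struts the token lives in the
// HttpSession, and the token methods are inherited from Action.
class TokenManager {
    private final Map<String, String> session = new HashMap<>();
    private static final String KEY = "TRANSACTION_TOKEN";

    // "Display" side: generate a fresh token and store it in the session.
    // The caller renders it as a hidden form field.
    public synchronized String saveToken() {
        String token = UUID.randomUUID().toString();
        session.put(KEY, token);
        return token;
    }

    // "Submit" side: check and reset together under one lock, so two
    // parallel submits cannot both observe a valid token.
    public synchronized boolean useToken(String submitted) {
        String expected = session.get(KEY);
        if (expected != null && expected.equals(submitted)) {
            session.remove(KEY); // reset: a duplicate submit now fails
            return true;
        }
        return false;
    }
}
```

The first call to useToken with the rendered token returns true and clears it; any replay of the same token (back button, double click, refresh) returns false.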
http://www.coderanch.com/t/51602/Struts/Duplicate-form-submission-Synchronizer-Token
In this document
- Defining Styles
- Applying Styles and Themes to the UI

See also
- Style and Theme Resources
- R.style for Android styles and themes
- R.attr for all style attributes

A style is a collection of attributes that specify the look and format for a View or window. A style can specify attributes such as height, padding, font color, font size, background color, and much more. A style is defined in an XML resource that is separate from the XML that specifies the layout. For example, by using a style, you can take this layout XML:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:textColor="#00FF00"
    android:typeface="monospace"
    android:text="@string/hello" />

And turn it into this:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:textAppearance="@style/CodeFont"
    android:text="@string/hello" />

The attributes related to style have been removed from the layout XML and put into a style definition called CodeFont, which is then applied using the android:textAppearance attribute. The definition for this style is covered in the following section.

Styles in Android share a similar philosophy to cascading stylesheets in web design—they allow you to separate the design from the content.

A theme is a style applied to an entire Activity or app, rather than an individual View, as in the example above. When a style is applied as a theme, every view in the activity or app applies each style attribute that it supports. For example, if you apply the same CodeFont style as a theme for an activity, then all text inside that activity appears in a green monospace font.

Defining Styles

To create a set of styles, save an XML file in the res/values/ directory of your project. The name of the XML file must use the .xml extension, and like other resources, it must use lowercase letters and underscores, and be saved in the res/values/ folder. The root node of the XML file must be <resources>. For each style you want to create, complete the following series of steps:

- Add a <style> element to the file, with a name that uniquely identifies the style.
- For each attribute of that style, add an <item> element, with a name that declares the style attribute. The order of these elements doesn't matter.
- Add an appropriate value to each <item> element.

Depending on the attribute, you can use values with the following resource types in an <item> element:

- Fraction
- Float
- Boolean
- Color
- String
- Dimension
- Integer

You can also use values with a number of special types in an <item> element. The following list of special types are unique to style attributes:

- Flags that allow you to perform bitwise operations on a value.
- Enumerators consisting of a set of integers.
- References, which are used to point to another resource.

For example, you can specify the particular value for an android:textColor attribute—in this case a hexadecimal color—or you can specify a reference to a color resource so that you can manage it centrally along with other colors. The following example illustrates using hexadecimal color values in a number of attributes:

<resources>
    <style name="AppTheme" parent="Theme.Material">
        <item name="colorPrimary">#673AB7</item>
        <item name="colorPrimaryDark">#512DA8</item>
        <item name="colorAccent">#FF4081</item>
    </style>
</resources>

And the following example illustrates specifying values for the same attribute using references:

<resources>
    <style name="AppTheme" parent="Theme.Material">
        <item name="colorPrimary">@color/primary</item>
        <item name="colorPrimaryDark">@color/primary_dark</item>
        <item name="colorAccent">@color/accent</item>
    </style>
</resources>

You can find information on which resource types can be used with which attributes in the attribute reference, R.attr. For more information on centrally managing resources, see Providing Resources. For more information on working with color resources, see More Resource Types.
Here's another example file with a single style:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="CodeFont" parent="@android:style/TextAppearance.Medium">
        <item name="android:textColor">#00FF00</item>
        <item name="android:typeface">monospace</item>
    </style>
</resources>

This example style can be referenced from an XML layout as @style/CodeFont (as demonstrated in the introduction above). The parent in the <style> element is optional and specifies the resource ID of another style from which this style should inherit attributes. You can then override the inherited style attributes.

A style that you want to use as an activity or app theme is defined in XML exactly the same as a style for a view. How to apply a style as an app theme is discussed in Apply a theme to an activity or app.

Inheritance

The parent attribute in the <style> element lets you specify a style from which your style should inherit attributes. You can use this to inherit attributes from an existing style and define only the attributes that you want to change or add. You can inherit from styles that you've created yourself or from styles that are built into the platform. For example, you can inherit the Android platform's default text appearance and modify it:

<style name="GreenText" parent="@android:style/TextAppearance">
    <item name="android:textColor">#00FF00</item>
</style>

For information about inheriting from styles defined by the Android platform, see Using Platform Styles and Themes. If you want to inherit from styles that you've defined yourself, you don't have to use the parent attribute. Instead, you can use dot notation by prefixing the name of the style you want to inherit to the name of your new style, separated by a period.
For example, to create a new style that inherits the CodeFont style defined above, but make the color red, you can create the new style like this:

<style name="CodeFont.Red">
    <item name="android:textColor">#FF0000</item>
</style>

Notice that there is no parent in the <style> tag, but because the name begins with the CodeFont style name, this new style inherits all style attributes from the CodeFont style. The new style then overrides the android:textColor attribute to make the text red. You can reference this new style as @style/CodeFont.Red.

You can continue inheriting styles like this as many times as you'd like by chaining names with periods. For example, you can extend CodeFont.Red to be bigger, with:

<style name="CodeFont.Red.Big">
    <item name="android:textSize">30sp</item>
</style>

This style inherits from the CodeFont.Red style, which itself inherits from the CodeFont style, then adds the android:textSize attribute.

Note: This technique for inheritance by chaining together names only works for styles defined by your own resources. You can't inherit Android built-in styles this way, as they reside in a different namespace to that used by your resources. To reference a built-in style, such as TextAppearance, you must use the parent attribute.

Style Attributes

Now that you understand how a style is defined, you need to learn what kind of style attributes—defined by the <item> element—are available. The attributes that apply to a specific View are listed in the corresponding class reference, under "XML attributes". For example, you might declare an EditText element that accepts only numeric input like this:

<EditText
    android:inputType="number"
    ... />

You can instead create a style for the EditText element that includes this property:

<style name="Numbers">
    <item name="android:inputType">number</item>
    ...
</style>

So your XML for the layout can now implement this style:

<EditText
    style="@style/Numbers"
    ... />

For a reference of all style attributes available in the Android Framework, see the R.attr reference.
For the list of style attributes available in a particular package of the support library, see the corresponding R.attr reference. For example, for the list of style attributes available in the support-v7 package, see the R.attr reference for that package.

Keep in mind that not all view objects accept all the same style attributes, so you should normally refer to the specific subclass of View for a list of the supported style attributes. However, if you apply a style to a view that doesn't support all of the style attributes, the view applies only those attributes that are supported and ignores the others.

As the number of available style attributes is large, you might find it useful to relate the attributes to some broad categories. The following list includes some of the most common categories:

- Default widget styles, such as android:editTextStyle
- Color values, such as android:textColorPrimary
- Text appearance styles, such as android:textAppearanceSmall
- Drawables, such as android:selectableItemBackground

You can use some style attributes to set the theme applied to a component that is based on the current theme. For example, you can use the android:datePickerDialogTheme attribute to set the theme for dialogs spawned from your current theme. To discover more of this kind of style attribute, look at the R.attr reference for attributes that end with Theme.

Some style attributes, however, are not supported by any view element and can only be applied as a theme. These style attributes apply to the entire window and not to any type of view. For example, style attributes for a theme can hide the app title, hide the status bar, or change the window's background. These kinds of style attributes don't belong to any view object. To discover these theme-only style attributes, look at the R.attr reference for attributes that begin with window.
For instance, windowNoTitle and windowBackground are style attributes that are effective only when the style is applied as a theme to an activity or app. See the next section for information about applying a style as a theme.

Note: Don't forget to prefix the property names in each <item> element with the android: namespace. For example: <item name="android:inputType">.

You can also create custom style attributes for your app. Custom attributes, however, belong to a different namespace. For more information about creating custom attributes, see Creating a Custom View Class.

Applying Styles and Themes to the UI

There are several ways to set a style:

- To an individual view, by adding the style attribute to a View element in the XML for your layout.
- To an individual view, by passing the style resource identifier to a View constructor. This is available for apps that target Android 5.0 (API level 21) or higher.
- Or, to an entire activity or app, by adding the android:theme attribute to the <activity> or <application> element in the Android manifest.

When you apply a style to a single View in the layout, the attributes defined by the style are applied only to that View. If a style is applied to a ViewGroup, the child View elements don't inherit the style attributes; only the element to which you directly apply the style applies its attributes. However, you can apply a style so that it applies to all View elements—by applying the style as a theme.

To apply a style definition as a theme, you must apply the style to an Activity or app in the Android manifest. When you do so, every View within the activity or app applies each attribute that it supports. For example, if you apply the CodeFont style from the previous examples to an activity, then the style is applied to all view elements that support the text style attributes that the style defines. Any view that doesn't support the attributes ignores them.
If a view supports only some of the attributes, then it applies only those attributes.

Apply a style to a view

Here's how to set a style for a view in the XML layout:

<TextView
    style="@style/CodeFont"
    android:text="@string/hello" />

Now this TextView uses the style named CodeFont. (See the sample above, in Defining Styles.)

Note: The style attribute doesn't use the android: namespace prefix.

Every framework and Support Library widget has a default style that is applied to it. Many widgets also have alternative styles available that you can apply using the style attribute. For example, by default, an instance of ProgressBar is styled using Widget.ProgressBar. The following alternative styles can be applied to ProgressBar:

Widget.ProgressBar.Horizontal
Widget.ProgressBar.Inverse
Widget.ProgressBar.Large
Widget.ProgressBar.Large.Inverse
Widget.ProgressBar.Small
Widget.ProgressBar.Small.Inverse

To apply the Widget.ProgressBar.Small style to a progress bar, supply the name of the style in the style attribute, as in the following example:

<ProgressBar
    style="@android:style/Widget.ProgressBar.Small"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />

To discover all of the alternative widget styles available, look at the R.style reference for constants that begin with Widget. To discover all of the alternative widget styles available for a support library package, look at the R.style reference for fields that begin with Widget. For example, to view the widget styles available in the support-v7 package, see the R.style reference for that package. Remember to replace all underscores with periods when copying style names from the reference.

Apply a theme to an activity or app

To set a theme for all the activities of your app, open the AndroidManifest.xml file and edit the <application> tag to include the android:theme attribute with the style name.
For example:

<application android:theme="@style/CustomTheme">

If you want a theme applied to just one activity in your app, add the android:theme attribute to the <activity> tag instead.

You can also extend a platform theme and override just the attributes you want to change. For example, the following custom theme is based on Theme.Material.Light and changes the window background color:

<style name="CustomTheme" parent="android:Theme.Material.Light">
    <item name="android:windowBackground">@color/custom_theme_color</item>
    <item name="android:colorBackground">@color/custom_theme_color</item>
</style>

Note: The color needs to be supplied as a separate resource here because the android:windowBackground attribute only supports a reference to another resource; unlike android:colorBackground, it can't be given a color literal.

Now use CustomTheme instead of Theme.Light inside the Android manifest:

<activity android:theme="@style/CustomTheme">

To discover the themes available in the Android Framework, search the R.style reference for constants that begin with Theme_.

You can further adjust the look and format of your app by using a theme overlay. Theme overlays allow you to override some of the style attributes applied to a subset of the components styled by a theme. For example, you might want to apply a darker theme to a toolbar in an activity that uses a lighter theme. If you are using Theme.Material.Light as the theme for an activity, you can apply ThemeOverlay.Material.Dark to the toolbar using the android:theme attribute to modify the appearance as follows:

- Change the toolbar colors to a dark theme but preserve other style attributes, such as those relating to size.
- The theme overlay applies to any children inflated under the toolbar.

You can find a list of ThemeOverlays in the Android Framework by searching the R.style reference for constants that begin with ThemeOverlay_.

Maintaining theme compatibility

To maintain theme compatibility with previous versions of Android, use one of the themes available in the appcompat-v7 library. You can apply a theme to your app or activity, or you can set it as the parent when creating your own backwards compatible themes. You can also use backwards compatible theme overlays in the Support Library.
To find a list of the available themes and theme overlays, search the R.style reference in the android.support.v7.appcompat package for fields that begin with Theme_ and ThemeOverlay_ respectively.

Newer versions of Android have additional themes available to apps, and in some cases you might want to use these themes while still being compatible with older versions. You can accomplish this through a custom theme that uses resource selection to switch between different parent themes based on the platform version.

For example, here is the declaration for a custom theme. It would go in an XML file under res/values (typically res/values/styles.xml):

<style name="LightThemeSelector" parent="android:Theme.Light">
    ...
</style>

To have this theme use the material theme when the app is running on Android 5.0 (API level 21) or higher, you can place an alternative declaration for the theme in an XML file in res/values-v21, but make the parent theme the material theme:

<style name="LightThemeSelector" parent="android:Theme.Material.Light">
    ...
</style>

Now use this theme like you would any other, and your app automatically switches to the material theme if it's running on Android 5.0 or higher.

A list of the standard attributes that you can use in themes can be found at R.styleable.Theme. For more information about providing alternative resources, such as themes and layouts that are based on the platform version or other device configurations, see the Providing Resources document.
https://developer.android.com/guide/topics/ui/look-and-feel/themes.html?hl=ja
NAME
XmGetAtomName — A function that returns the string representation for an atom

SYNOPSIS
#include <Xm/Xm.h>
#include <Xm/AtomMgr.h>

String XmGetAtomName(
    Display * display,
    Atom atom);

DESCRIPTION
XmGetAtomName returns the string representation for an atom. It mirrors the Xlib interfaces for atom management but provides client-side caching. When and where caching is provided in Xlib, the routines will become pseudonyms for the Xlib routines.

RETURN
Returns a string. The function allocates space to hold the returned string. The application is responsible for managing the allocated space. The application can recover the allocated space by calling XFree.
https://manpages.debian.org/bullseye/libmotif-dev/XmGetAtomName.3.en.html
oid 0.0.2

OID

A library supporting the OID format.

Usage

A simple usage example:

import 'package:oid/oid.dart';

main() {
  var oid = new Oid("1.2.3.4.5");
  print(oid);
}

Changelog

0.0.1
- Initial version

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  oid: ^0.0.2

2. Install it

You can install packages from the command line with pub:

$ pub get

Alternatively, your editor might support pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:

import 'package:oid/oid.dart';
https://pub.dev/packages/oid
I'm writing some template code to determine if a given type can be passed as any argument to any available overload of a function. In the example below I've used the log function, but I've also tried this code on others in the math library, and the results are the same.

The idea is to use function overloading and the sizeof operator to distinguish between cases where the type in question can legally be passed to the function in question (log, in this example). If it worked, we'd have sizeof(overload<type>(NULL)) == sizeof(True) when 'type' can be legally passed to log, and sizeof(overload<type>(NULL)) == sizeof(False) otherwise.

This does seem to work for most types, but fails for std::string. Here's exactly how it fails: under normal circumstances we have sizeof(overload<std::string>(NULL)) == sizeof(False), as we should. But when I declare an overload of log that does take a string, it still doesn't trigger the sizeof(True) branch of the logic. Note that I don't actually want to declare a log(std::string) function; I'm just testing this code to make sure that it's able to detect all possible overloads.

At first I thought it just wasn't detecting overloads properly, but when I tried it with a user-defined class ('MyClass' in the example below), it worked fine: it produced sizeof(True) when log(MyClass) was declared, and sizeof(False) otherwise.

#include <iostream>
#include <string>
#include <math.h>

template<int> struct TakesInt{};

struct True { };
struct False
{
    // guarantees that sizeof(False) != sizeof(True)
    True array[2];
};

// takes anything; fall back if no match can be found
template<typename T> False overload(...);

// takes a specific type; does not actually call log
template<typename T> True overload(TakesInt<sizeof(log(T()))>*);

// As a test, this is an overload of log that takes a string.
// I don't actually want to implement this, but it should make the compiler
// think that a string is a valid argument.
double log(std::string);

// a placeholder for user defined class; could really be anything,
// like an arbitrary number class
struct MyClass{};

// declaring log for the arbitrary class above
// note that this is the same as for the log(std::string)
// if one works, the other should
double log(MyClass);

int main()
{
    std::cout << sizeof(True) << '\t' << sizeof(False) << std::endl;
    std::cout << sizeof(overload<std::string>(NULL)) << std::endl;
    std::cout << sizeof(overload<double>(NULL)) << std::endl;
    std::cout << sizeof(overload<MyClass>(NULL)) << std::endl;
    return 0;
}

Here's the same issue without the SFINAE distraction:

#include <iostream>

namespace ns { struct string {}; }

void bar(...) { std::cout << "void bar(...)\n"; }

template<class T>
void foo()
{
    T x{};
    bar(x);
}

void bar(ns::string) { std::cout << "void bar(ns::string)\n"; }

int main()
{
    foo<int>();
    foo<ns::string>();
}

Output:

void bar(...)
void bar(...)

Lookup of a dependent function name is performed via ordinary lookup at the point of the template's definition, plus argument-dependent lookup (only) at the point of instantiation. Therefore, the following example differs:

#include <iostream>

namespace ns { struct string {}; }

void bar(...) { std::cout << "void bar(...)\n"; }

template<class T>
void foo()
{
    T x{};
    bar(x);
}

namespace ns
{
    void bar(ns::string) { std::cout << "void bar(ns::string)\n"; }
}

int main()
{
    foo<int>();
    foo<ns::string>();
}

Output:

void bar(...)
void bar(ns::string)

For std::string, the only associated namespace is std. The global namespace is not associated and will not be searched in the OP's code. Therefore, the overload declared after the template definition will not be found.

N.B. Please do not inject overloads into namespace std. This will lead to undefined behaviour as per [namespace.std]/1.
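To see this applied to the OP's sizeof trick: it is enough to make the extra overload visible before the template is defined (or to declare it in the argument type's own namespace so ADL finds it). A reduced sketch, using names of my own rather than overloading the real log:

```cpp
#include <string>

template<int> struct TakesInt {};

struct True { };
struct False { True array[2]; }; // guarantees sizeof(False) != sizeof(True)

// Overloads declared BEFORE the template definition, so ordinary lookup
// at definition time can find them (this is the key difference).
double log2arg(std::string);
double log2arg(double);

// Fallback: matches anything.
template<typename T> False overload(...);
// SFINAE candidate: valid only if log2arg(T()) compiles.
template<typename T> True overload(TakesInt<sizeof(log2arg(T()))>*);

struct NoLog {}; // a type with no log2arg overload

// Compile-time checks: std::string and double are now detected,
// a type without an overload is rejected.
static_assert(sizeof(overload<std::string>(0)) == sizeof(True),  "string detected");
static_assert(sizeof(overload<double>(0))      == sizeof(True),  "double detected");
static_assert(sizeof(overload<NoLog>(0))       == sizeof(False), "NoLog rejected");
```

The static_asserts pass because both log2arg declarations are in scope when the template is defined, so no ADL is needed at instantiation time.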
http://m.dlxedu.com/m/askdetail/3/2d52df754fe82a41549156298fbe3ecc.html
Manages storage of images and other files, with metadata. Also offers an HTTP API built on Pyramid.

Project description

Scope

keepluggable is an open-source (MIT-licensed), highly configurable Python library that manages storage of images and other documents (any kind of file, really), with metadata. The documentation is at

The file metadata can be stored in a different place than the file payload. This is recommended because many operations, such as listing files, do not involve the actual file content, so you should avoid loading it. Also, payloads should be optimized for serving, and metadata should be optimized for querying.

For file payloads, we currently have one backend implemented that stores them in Amazon S3. There is also a very simple backend that stores files in the local filesystem (useful during development). For (optionally) storing the metadata we currently provide a base SQLAlchemy backend for you to subclass. In both cases, you can easily write other storage backends.

Using this library you can more easily have your users upload images (or any kind of file) and enter metadata about them, such as name, description, date, place, alt text, title attribute etc. Some of the metadata is found automatically, such as file size, MIME type, image size, aspect ratio, MD5 checksum etc.

The code is highly decoupled so you can tweak the behaviour easily. The business rules are implemented in a separate layer (isolated from any of the storage strategies and any UI), called an "action" layer. (This is commonly known as a "service" layer, but we call it "action".) This makes it possible for us to have any storage backends and use any web frameworks or other UI frameworks.

Each application has its own business rules, therefore it is likely that you will subclass the provided action layer to tweak the workflow for your purposes. One such "action" is the pluggable policy for uploaded image treatment.
For instance, the default policy converts the original uploaded image to the JPEG format (so it will never store an unnecessarily large BMP), optionally stores the original image in whatever size it is, then creates configurable smaller versions of it.

Some cameras do not rotate the photo; they just add orientation metadata to the image file, so keepluggable rotates it for you before creating the thumbnails.

Collaboration

We want your help. We are open to feature requests, suggestions, bug reports and pull requests, in reverse order of openness.

Migration to keepluggable 0.8

keepluggable 0.8 changes the way files are stored. How?

- It separates namespaces using the "/" character rather than "-". This creates a better user experience in the S3 Management Console.
- Now you can use only one bucket per environment if you wish to. Multiple keepluggable integrations (in a single app) can use the same bucket, because each keepluggable integration can use its own directories.
- Between the bucket name and the file name you can create your own directory scheme (e.g. "/users/42/avatars/angry_mode/"). I am calling this a "middle path". See the function get_middle_path() in the orchestrator.py file.

A migration function is provided so you can update your old storages to keepluggable 0.8. See the method migrate_bucket() in the file amazon_s3.py.

The names of the configuration settings also changed in 0.8.
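To illustrate the "middle path" idea, a scheme like "/users/42/avatars/" boils down to a small path-building function. The sketch below is purely illustrative; the real hook is the get_middle_path() function in keepluggable's orchestrator.py, and its actual signature may differ:

```python
def get_middle_path(user_id, category):
    """Hypothetical middle-path builder: returns the directory scheme
    that sits between the bucket name and the file name, so multiple
    keepluggable integrations can share one bucket without colliding."""
    return "users/{}/{}".format(user_id, category)

# Two integrations sharing one bucket, separated by their middle paths:
avatar_key = get_middle_path(42, "avatars") + "/angry_mode.jpg"
doc_key = get_middle_path(42, "documents") + "/resume.pdf"
print(avatar_key)  # users/42/avatars/angry_mode.jpg
print(doc_key)     # users/42/documents/resume.pdf
```

Because the middle path is computed per integration, each one writes under its own prefix, which is what makes the one-bucket-per-environment setup described above possible.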
https://pypi.org/project/keepluggable/
Hi. I have a problem running the following command:

wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo where "replicatedfolderguid='941ABE42-4687-4A38-9C5D-AE3BA1ABCE9A'" call cleanupconflictdirectory

It worked fine on one server, but returned "No Instance(s) Available." on another. I cannot find anyone else with this problem, so I was hoping someone here could help me solve it. Has anyone else seen this problem?

Thanks
Julian

Hi Julian,

I am trying to resolve the older threads here, at least to see whether they were solved. If not, please let us know.

For your problem, I would first query the server that has the problem to check whether the GUID can be found:

dfsradmin rf list /rgname:testrg /attr:rfname,rfguid

"No Instance(s) Available" is usually returned when the referenced replicated folder is not found on the server. Please have a look at the following article for more hints about replicated folders:

Thank you,
F. Schubert
System Administrator
MCP | Microsoft Certified Professional
MCTS 70-640 | Microsoft Certified Technology Specialist: Windows Server 2008 Active Directory, Configuration
MCTS 70-642 |
https://social.technet.microsoft.com/Forums/windowsserver/en-US/d07b3265-7ff8-4fc4-98f8-4c775949c5aa/problems-with-call-cleanupconflictdirectory-retuning-no-instances-available?forum=windowsserver2008r2branchoffice
Installing CUDA and cuDNN on Windows 10

This is a how-to guide for someone who is trying to figure out how to install CUDA and cuDNN on Windows to be used with TensorFlow. It should be noted that at the time of writing this, TensorFlow supports only up to CUDA version 9.0 and the corresponding cuDNN libraries, so please don't download CUDA 9.2.

Installing CUDA 9.0 on Windows

Prerequisites:

- NVIDIA GPU with compute capability > 2.0. Check your GPU here
- Download CUDA version 9.0

Please note: if your connection permits, download the local version. That saves you from sitting around waiting for the download to finish at installation time. The download should be ~1.4 GB. Once the download finishes, launch the installer and follow the defaults. It takes around 10-15 minutes for the installation to finish. Please verify the files at the default install location after the installation finishes:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0

Installing cuDNN from NVIDIA

First of all, register yourself at the NVIDIA Developer site. It's a free registration and takes only a couple of minutes. From there, the installation is a breeze.

Once registered, go to the download page and accept the terms and conditions. After that, download cuDNN v7.1.4 for CUDA 9.0. Once the files are downloaded locally, unzip them. Installing cuDNN is pretty straightforward: you just have to copy three sets of files from the unzipped directory to the CUDA 9.0 install location. For reference, the NVIDIA team has put them in their own directories. So all you have to do is copy files from:

{unzipped dir}/bin/ --> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
{unzipped dir}/include/ --> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\include
{unzipped dir}/lib/ --> C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\lib

That's it.

Testing it with TensorFlow

Install tensorflow-gpu using pip3 install tensorflow-gpu. Once that is done, fire up a Python console and do a from tensorflow import *.
If you don’t see any errors, we are good. Another way you know that your GPU is being used by executing a keras model and having it use tensorflow as its backend. So at the runtime, you should see a message like this : 2018-08-05 23:43:32.091733: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2018-08-05 23:43:33.288310: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties: name: GeForce GTX 1050 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.62 pciBusID: 0000:01:00.0 totalMemory: 4.00GiB freeMemory: 3.29GiB 2018-08-05 23:43:33.289799: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0 2018-08-05 23:43:35.537890: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix: 2018-08-05 23:43:35.538772: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958] 0 2018-08-05 23:43:35.539309: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0: N 2018-08-05 23:43:35.540537: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3020 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1) Happy coding I am a Devops Expert and Machine Learning enthusiast. Please find my original blog post at An Average Joe Akshay Source: Deep Learning on Medium
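The check described above can also be scripted. The sketch below uses TensorFlow's device_lib module (present in the 1.x releases this guide targets) to list visible devices; whether a "GPU" entry actually appears depends on the machine it runs on:

```python
from tensorflow.python.client import device_lib

# List every device TensorFlow can see; a working CUDA/cuDNN install
# contributes at least one entry whose device_type is "GPU".
devices = device_lib.list_local_devices()
for d in devices:
    print(d.device_type, d.name)

has_gpu = any(d.device_type == "GPU" for d in devices)
print("GPU available:", has_gpu)
```

If has_gpu prints False even though the driver, CUDA 9.0 and cuDNN were installed, double-check that tensorflow-gpu (not plain tensorflow) is the package pip actually installed.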
https://mc.ai/installing-cuda-and-cudnn-on-windows-10/
The de facto standard API for networking is the sockets interface. It was created for Berkeley Unix and has become the standard interface. The Cleansocks library provides a C++-based interface to the standard sockets, designed to preserve the concepts and procedures of the sockets library while hiding some of the C-ish grunginess. Users can declare sockets by type, use them to listen and connect, send and receive, and generally apply the usual socket operations, avoiding much of the low-level code needed to create and configure them. Cleansocks provides a couple of convenient extensions: a simple one that buffers inbound data, mainly to allow reading line by line, and a more complex one that supports creating TLS client connections. Cleansocks also reports errors using exceptions, which simplifies coding and makes errors harder to miss and easier to locate. All objects and methods described in these pages are in the namespace cleansocks. You'll need to qualify names, or use some form of the using directive. Cleansocks is developed primarily on Linux, but made to work on Windows (over Winsock) as well, and it hides the differences between those two platforms. It should build easily enough on other Unix-like platforms.
http://sandbox.mc.edu/~bennet/cs423v2/cleansocks/intro.html
Mountain Xpress 12.18.13 Independent news, arts and events for Western North Carolina OUR 20TH YEAR OF WEEKLY INDEPENDENT NEWS, ARTS & EVENTS FOR WESTERN NORTH CAROLINA VOL. 20 NO. 22 DECEmbER 18 - DECEmbER 24, 2013 8 plus: 28 charting the future of asheville tourism Gingerbread Competition has heart Holi-DIY festive handmade projects from across WNC Asheville’s Oldest Wine Store NOW OPEN WINE BAR/TAPROOM & CHEESE STORE AT THE WEINHAUS 675 hour Massage Certification Program Accepting Applications for April 2014 AshevilleMassageSchool.org • 828-252-7377 Discounts Available Holiday Specials Gift Baskets Party Platters • Stocking Stuffers 1 Case Wine = 12 Presents Host your Holiday Party at our Wine Bar—FREE! Artisan Cheese, Meats and Gourmet Products with a Local Flair Cheese Plates, Sandwiches & Salads • Custom Crafted Party Platters & Gift Baskets 86 Patton Avenue 828.254.6453 2 DEcEmBER 18 - DEcEmBER 24, 2013 mountainx.com GIVE THE GIFT OF BILTMORE Wrap up 365 days of timeless beauty with an Annual Pass Elevate gift-giving to extraordinary levels with a Biltmore Annual Pass! A Biltmore Pass delivers a world of benefits and special offers, including: • Discounts on estate dining and shopping • Discounts on Candlelight Christmas Evenings tickets • Kids admitted free year-round • Discounts on tickets for family & friends • Exclusive offers during January Passholder Appreciation • Discounts on guided tours, specialty wine experiences, and outdoor activities • And much more! Purchase online at biltmore.com/annualpass mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 3 Pet Problems? We can help! Asheville Humane Society operates a Safety Net Program: a free resource to all Buncombe County residents. contEnts contact us pagE 36 Holi-DIY Crafts, from simple construction paper garlands to elaborate gingerbread houses, go hand in hand with the holidays. Here, Xpress readers offer some of their favorite festive projects and the memories those crafts bring to mind. 
coVER DEsign Laura Barry (828) 251-1333 fax (828) 251-1311 • • • • • Re-homing Tool Kit & Support Pet Behavior Help Spay/Neuter Assistance Financial Hardship Options Pet Friendly Housing Listings news tips & story ideas to nEws@mountainx.com letters to the editor to LEttERs@mountainx.com business news to BusinEss@mountainx.com a&e events and ideas to aE@mountainx.com 828.250.6430 • ashevillehumane.org Features nEws events can be submitted to caLEnDaR@mountainx.com or try our easy online calendar at mountainx.com/EVEnts food news and ideas to fooD@mountainx.com FREE iPad! Purchase or sell a home using Marshall Real Estate & receive a new iPad at closing! Contact: Brian Marshall at 828-243-0295 Serving Asheville & surrounding areas since 2000 8 DEstination ashEViLLE Local leaders chart course for more tourism development wellness-related events/news to mxhEaLth@mountainx.com. venues with upcoming shows cLuBLanD@mountainx.com get info on advertising at aDVERtisE@mountainx.com hoLiDay 22 winDow wonDERLanD Downtown merchants celebrate the holidays with window decorations place a web ad at wEBaDs@mountainx.com question about the website? 
wEBmastER@mountainx.com find a copy of xpress jtaLLman@mountainx.com wELLnEss store closing sale owners retiring 24 BREaking thE siLEncE Our VOICE reaches out to male victims of sexual abuse 5 5 7 14 16 21 34 46 48 55 61 62 63 LEttERs caRtoon: moLton caRtoon: BREnt BRown community caLEnDaR conscious paRty in thE spiRit smaLL BitEs smaRt BEts cLuBLanD moViEs cLassifiEDs fREEwiLL astRoLogy ny timEs cRosswoRD Holiday Specials & Discounts New Books | Calendars Greeting Cards 9:30 - 6 Monday thru Saturday 1 - 5 Sunday afternoons 28 housE paRty National Gingerbread House Competition has heart fooD a&E a&E 40 outwaRD BounD Rising Appalachia returns to Asheville for a solstice show 42 hot foR thE hoLiDays Bombs Away Cabaret’s XXXmas 4 DEcEmBER 18 - DEcEmBER 24, 2013 (under Stein Mart on Merrimon) 252-6255 T he Original Gina Smith fooD EDitoR/wRitER: Gina Smith EDitoRiaL assistants: Hayley Benton, Carrie Eidson, Lea McLellan moViE REViEwER & cooRDinatoR: Ken Hanke EDitoRiaL intERns: Brandy Carl, Max Miller, Micah Wilkins contRiButing EDitoRs: Jon Elliston, Peter Gregutt, Rob Mikulak TAPROOM & PIZZERIA 56 TAPS • est. 1994 • 100 BEERS SUN THU WED TUE MON Kids Eat FREE Pint Special Dr. Brown’s Team Trivia Live Jazz, Alien Music Club Live Music 29 TAPS • DARTS • BILLARDS LATE NIGHT MENU 828-255-0504 BARLEY’S UPSTAIRS come check out ... caRtoon By RanDy moLton 42 BILTMORE AVE. MON-SAT 11:30AM-TIL SUN 12PM-TIL BARLEYSTAPROOM.COM Climate change is more than just politics I’m writing today in regards to a story I recently saw in the Mountain Xpress [“Buncombe Commissioners Set Bar High for Carbon Reductions,” Dec. 17] and felt it necessary to comment on a reported statement from [Commissioner Mike] Fryar. [Xpress reporter Jake Frankel writes:] “Fryar blasted the move as ‘politics,’ questioning whether there really is any scientific consensus on global warming.” Mr. Fryar, there is most assuredly consensus on climate change (the preferred nomenclature of most scientists). 
Both NASA and the National Oceanic and Atmospheric Administration, two of the leading scientific organizations in our country, agree that there is ongoing, man-made climate change. We trust NASA to take us to the moon and Mars but we won't trust them here? The American Association for the Advancement of Science, the preeminent scientific organization in the U.S., has also reached consensus regarding this matter. Do you listen to your dentist when you have problems with your teeth? Or do you listen to your neighbor? When you need surgery, do you go to a surgeon? Or do you let the waiter at your favorite diner do the job? We need to stop listening to propaganda from TV talking heads, bloggers and the like, and start listening to the professionals. I hope you take the time to look at the evidence I've provided from three amazing sources. I am an unaffiliated voter and I can assure you this has nothing to do with politics. — Sean McNeal Asheville much. But the real reason I never before invested in health care insurance is that I was never exactly certain what I was We Want Your Junk garrisonrecycling@gmail.com theregenerationstation.blogspot.com • junkrecyclers.net 828.707.2407 mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 5 CELEBRATE 21 YEARS OF THE OMNI GROVE PARK INN NATIONAL GINGERBREAD HOUSE COMPETITION™ Public viewing Sunday - Thursday, excluding holidays. November 20th, 2013 - January 2nd, 2014 6 DEcEmBER 18 - DEcEmBER 24, 2013 mountainx. caRtoon By BREnt BRown Remember Newtown by advocating for gun reform Twenty-seven innocent people were killed in the Newtown, Conn., tragedy — 20 of those victims were children. In the year since the Newtown shooting, 11,126 people have died from gun violence. This year was marked by 24 mass shootings, which are defined by the FBI as shootings with four or more victims, such as the naval yard shooting [in Washington, D.C.] that killed 13 people. 
On average, 32 Americans are murdered with guns every day; an estimated eight of those killed are children and teens. And what is the response from elected officials to all this senseless violence? Well, the U.S. Congress did nothing. Of the less than 60 laws passed in 2013 (a historic low), none dealt with gun reform. Here in North Carolina, the Legislature slashed existing gun control laws, permitting guns on school campuses, playgrounds, restaurants and bars. They limited the ability of law enforcement to restrict multiple handgun permits to individuals and removed concealed-gun permit information from the public record. Supposing the founders intended citizens to have rights to whatever firearms they desire under the Second Amendment, which is debatable, no right is entirely unfettered. Certainly, one person’s right should not impinge on another’s safety. How many will die before we make common-sense gun reforms to protect our families? — Sarah Grace Zambon Fletcher asking for money! On these crowded sidewalks, there doesn’t seem to be an awareness of where anything is placed and how placement can restrict movement. Our sidewalks look cluttered and are inconsiderate of pedestrians. — Paul Viera Asheville • Wouldn’t it be great if your investment portfoilio aligned with your values? • What is if was comprised of companies focused on alternative energy,organic foods, energy efficiency, and clean water? • And what if it was fossil-fuel free? • We’ve got what you need. 877-235-3684 mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 7 N E W S Destination Asheville Local leaders chart a course for tourism development By jakE fRankEL jfrankel@mountainx.com 251-1333 ext. 115 , socalled said. “Those are marketing dollars that you aren’t spending because they’re doing it for you.” On the other hand, a “negative complaint” via social media “resonates far more than something that’s positive,” he cautioned. 
And with thRough thEiR EyEs: Images 8 DEcEmBER 18 - DEcEmBER 24, 2013 mountainx.com, “Authenticity is what separates Asheville from Greenville or any other place people want to go.” — mikE konzEn Photo by Jake Frankel. X mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 9 nEws by David Forbes dforbes@mountainx.com New mayor in town changing of thE guaRD: On Dec. 10, Judge Alan Thornburg swore in Esther Manheimer as Asheville’s new mayor. Photo by Alicia Funderburk Manheimer sworn in; Hunt named vice mayor A new Asheville City Council met Dec. 10, with Esther manheimer sworn in as mayor, marc hunt chosen as the new vice mayor, three development decisions postponed and neighborhood leaders raising concerns about issues in East Asheville. nEw facEs By 3:45 p.m., Council chambers were packed for a swearing-in ceremony as residents, supporters and family members gathered to see Manheimer and three Council members sworn in. The newest Council member, gwen wisler, joined reelected members cecil Bothwell and gordon smith. Council unanimously chose Hunt as vice mayor. jan Davis (vice mayor from 2009-11) remarked that he felt Hunt was best suited to “hold up the arms of the mayor” and attend to the position’s administrative role. Manheimer praised two-term Mayor terry Bellamy as bringing a new level of respect to the mayorship. Council presented Bellamy with a gavel plaque and a proclamation in appreciation for 13 years time as an elected official. In a brief speech, Manheimer said that Asheville is a diverse city, but that its different populations are united by a desire for an improving quality of life. In pursuing that, she said there are many challenges as well as opportunities. “Even after one hurdle is overcome, there will always be more,” Manheimer said. “We value fostering and supporting our small businesses, things like locally grown food, we want to stay focused on truly affordable 10 DEcEmBER 18 - DEcEmBER 24, 2013 mountainx.com housing. 
These are the things that build a community.” She emphasized the importance of partnerships with Buncombe County and state government in pursuing these goals. “What your city needs to do for you is invest to bring about positive change in the community,” Manheimer said. “I believe that together we can work to grow our quality of life and continue to improve.” DEVELopmEnt DELays Earlier in the month, it looked as if the new Council might have a a trial by fire during its regular Dec. 10 meeting: Three development hearings were on the agenda: a controversial East Chestnut Street project, the 192-housingunit Avalon in South Asheville and a proposed overhaul of River Arts District zoning and development procedures But each was delayed. The Chestnut Street developer, PBCL, requested a delay; that hearing was postponed until Jan. 28. As for the Avalon project, some Council members voiced concern that it required a rezoning of industrial land and that it didn’t include an affordable housing component. Triangle Real Estate representatives asserted that the city has rezoned other industrial properties when another goal (such as dealing with its housing crunch) was more important and that the rents will be reasonable for the area. Council delayed the project until Feb. 12. The River Arts District development overhaul was also postponed to Jan. 14 at staff’s request, to give them more time to prepare the exact details of the changes in oversight. fRustRatED in East ashEViLLE During the public-comment portion of the meeting, kim martin-Engle, a leader of the EAST Community group, said that members are frustrated because city staff had stopped working with them on development planning and safety improvements for East Asheville. Martin-Engle asked staff to resume that work, including a corridor study for Tunnel Road similar to the one the city is conducting for the Haywood Road area. 
“We believe the city, working with residents and stakeholders, can create a boulevard entrance into and out of downtown,” MartinEngle told Council. “Presently, the perception of many drivers is [that Tunnel Road is] a roadway designed for speed. Tragically, since November 2012 two pedestrians have been fatally injured on this very corridor.” Planning Director judy Daniel responded that city reorganizations due, in part, to tight budgets have left staff with less time for community and neighborhood groups like EAST. She agreed, however, that development planning in the Tunnel Road area should be a major priority. Manheimer noted that the ongoing Haywood corridor study required extra funds from the city for both staff time and consultant fees. Many Council members, especially chris pelly, a former East Asheville neighborhood activist, praised Martin-Engel and EAST for bringing the issue to their attention. Council referred the matter to its Planning and Economic Development committee for further discussion. That committee is composed of Manheimer, Hunt and Bothwell. Its next meeting is scheduled for Jan. 21. cLosED sEssion Council ended with a closed session to discuss an undisclosed personnel matter, but didn’t have a key staff member present (the member wasn’t specified, but City manager gary jackson was absent that day. Council was to reconvene at 3:30 p.m., Dec. 17, for holding that closed session. Council members reported that they expected no public action at that meeting. X NORTH CAROLINA STAGE COMPANY PRESENTS Jacob Marley’s Christmas Carol “You know his story, but you donʼt know the whole story...” By: Tom Mula DIRECTED BY ANDREW HAMPTON LIVINGSTON STARRING MICHAEL MACCAULEY “Scrooge? I have to redeem old Scrooge? The one man I know who was worse than I? Impossible!” So begins the journey of Jacob Marley’s heroic behind-the-scenes efforts to save old Scrooge’s soul— and in the process, save his own. 
Free Gutter Cleaning 828-565-1984 Pruning • Removal Installation • Consultation with $500 purchase December 11 - December 29 Opening Night is Pay What you Can Night NCSTAGE.ORG • 828.239.0263 15 Stage Lane — Downtown Asheville! mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 11 See What Our Customers Say: 828-565-1984 nEws compiled by Jake Frankel From the News Desk tuRnER announcEs nc statEhousE BiD against moffitt Former UNC Asheville Assistant Vice Chancellor Brian turner (D) announced Dec. 12 that he’s running for the North Carolina Statehouse in 2014 against two-term incumbent Rep. tim moffitt (R). If any other Democrats decide to run, there would be a Democratic primary to see which candidate challenges Moffitt. No other Democrats have declared their intention to run at this time. In a press release announcing his political debut, Turner focuses on his local roots as well as education issues. . “We need to be strengthening our schools, community colleges, and universities or we will no longer be competitive in attracting the kind of stable, well-paying jobs this region was built on.” The Biltmore Forest resident is a small business owner and former head of manufacturing and operations for family business Mills Manufacturing. Moffitt soon responded to the news via a Twitter message: “I’d like to personally welcome Brian Turner to the race,” he wrote. “I’m looking forward to what I’m sure will be an informative campaign.” —– Jake Frankel fiRE at foRmER Dickson sitE The former site of Isaac Dickson Elementary School at 125 Hill St. suffered from a fire Dec. 12. No injuries were reported in the blaze, although its wafting smoke caught the attention of people across Asheville. The facility is in the process of being demolished and caught fire when torches used to cut steel beams burned the ceiling of the auditorium. No students were at the facility when the fire broke out. 
It was contained by firefighters after a few hours, although nearby streets were closed for much of the day, creating traffic problems. — staff reports X and with it comes special hazards to our furry, feathered, and scaled family members. REACH and AVS are here 24/7 to help during the holidays! Check out our website for tips on winter hazards to avoid QUALITY CARE WITH COMPASSION e Holiday Season is here... aRtfuL mEssagE: Part of a cross-country protest, GMO-labeling activists parked five “art cars” outside the French Broad Food Co-op on Dec. 12. Photo by Lea McLellan pEtition sEEks LaBELing of gmos in ingLEs maRkEts Dec. 12., goofylooking.” Meanwhile, citizens are collecting signatures for a petition to regional supermarket chain Ingles Markets 24 hours/day • 7 days/week 677 Brevard Road, Asheville See all we do: 12 DEcEmBER 18 - DEcEmBER 24, 2013 828-665-4399 asking the store to label genetically modified food products, and to promote the labeling of these products to Monsanto Company and the Grocery Manufactures Association. “If the grocery chains start speaking up about GMOs like they’ve done in other countries, that’s when you’ll begin to see an effect,” said Deb criss, a local resident who organized the petition through MoveOn.org. The petition, titled “You Can Do It Ingles!,” went live shortly after Thanksgiving and as of Dec. 13 has gained more than 2,000 signatures, according to organizers. Anyone in the Ingles service area can sign the petition, which will eventually be presented to the Ingles corporate office in Black Mountain. Criss said Ingles has been accepting literature and listening to arguments from both sides of the GMO labeling argument; she credits the grocery chain with being receptive to community input. 
The company is committed to “continuing to monitor the conversation nationally and among manufacturers and retailers on the topic of genetically engineered crops and ingredients, as well as relevant scientific studies and research,” Ingles CFO Ron Freeman said in a statement provided to local media. — Lea McLellan and Carrie Eidson mountainx.com I stopped by Volkswagon of Asheville on my way to the North Pole to check out their vintage Volkswagon models. Ho, Ho, Ho! How cool is this place! I have always enjoyed Christmas in Asheville and Western North Carolina - now I can’t wait to stop by Harmony Motors every year. Besides the cars, I love to check out their neon signs, local artwork, vintage gas pumps and more! And the folks who work here are top notch! I would trust them to work on my sleigh. ... Wishing your family a happy and safe holiday season!. Growing Families‌ DEcEmBER 18 - DEcEmBER 24, 2013 13 mountainx.com C O M M U N I T Y DEc. 18 - DEc. 24,. Cityscapes by Ben Aronson will be displayed until March at the Asheville Art Museum. A guided tour will be held Friday, Dec. 20 as part of the Lunchtime Art Break series. (p. 14) AnimAls Pet Food drive • Through (12/24), 8am-8:30pm - The Animal Hospital of North Asheville, 1 Beaverdam Road, will host a pet food drive. Canned or dry pet food, blankets, pet toys and/or monetary donations will be accepted. Info: ahna.net. Morton: An Uncommon Retrospective will be on display in Galleries A and B. Art At AsU Exhibits take place at Appalachian State University's Turchin Center for the Visual Arts, unless otherwise noted. Tues.Thurs. & Sat., 10am-6pm; Fri., noon8pm. Donations accepted. Info: tcva.org or 262-7 in the AirPort GAllery Located on the pre-security side of the Asheville Regional Airport terminal. Open to the public during the airport’s hours of operation. Info: art@flyavl.com or flyavl.com. • Through FR (1/3) - The gallery's 19th exhibition will feature works from six local artists. 
Asheville AreA Arts coUncil GAllery 346 Depot St. Tues.-Sat., 11am-4pm. Info: ashevillearts.com or 258-0710. • Through FR (1/24) - A Girl and A Gun: Asheville Artists Cope With Love and Death Asheville Art mUseUm Located on Pack Square in downtown Asheville. Tues.-Sat., 10am-5pm and ONGOING - Esteban Vicente: The Art of Interruption will feature paintings, drawings and collages. • Through SU (3/9) - Cityscapes, works by Ben Aronson. • FR (12/20), noon-1pm - Lunchtime Art Break: guided tour and discussion of Cityscapes by Ben Aronson. bellA vistA Art GAllery 14 Lodge St. Hours: Mon., Wed., & Thurs., 11am-4pm; Fri. & Sat., 11am5pm. Info: bellavistaart.com or 768-0246. • Through FR (1/31) - Works by Karen Jacobs and photographs by Paul Owen. blAck moUntAin center For the Arts 225 W. State St., Black Mountain. Mon.Fri., 10am-5pm. Info: BlackMountainArts. org or 669-0930. • Through (1/24) - Clay studio exhibit and ceramics sale in the Upper Gallery. Free. blAck moUntAin colleGe mUseUm + Arts center The center, which preserves the legacy of Black Mountain College, is located at 56 Broadway St., Asheville. Tues. & Wed., noon-4pm; Thurs.-Sat., 11am5pm. Info: blackmountaincollege.org or Art AbstrAct PAstels • Through TH (12/19) - Abstract Pastels, paintings by Bridget Risdon Hepler, will be on display at The Junction, 348 Depot St. #190. Info: thejunctionasheville.com or 225-3497. Art At APPAlAchiAn stAte University 423 W. King St., Boone. Info: tcva.org or 262-3017. • ONGOING - Photographs by Hugh 14 DEcEmBER 18 - DEcEmBER 24, 2013 mountainx.com 350-8484. • ONGOING - Shaping Craft and Design. blUe sPirAl 1 38 Biltmore Ave. Mon.-Sat., 10am-6pm, and Sun., noon5pm. Info: bluespiral1.com or 251-0202. • Through TU (12/31) - A group show will feature ceramics by Josh Copus and Marlene Jack, photography by John Dickson and paintings by Peggy N. Root. cAstell PhotoGrAPhy 2-C Wilson Alley. Tues.-Sat., by appointment. Fri. & Sat., 11am6pm. Info: castellphotography. com or 255-1188. 
• Through SA (1/11) - NEXT: New Photographic Visions.

Elements Spa and Shop
29 W. French Broad St., Brevard. Hours: Sat.-Wed., 9am-6pm; Thurs., 9am-7pm. Info: 884-2701.
• Through WE (1/8) - Paintings by Karen Keli Brown.

Folk Art Center
MP 382 on the Blue Ridge Parkway. Open daily from 9am-6pm. Info: craftguild.org or 298-7928.
• Through TU (1/28) - Book arts by Annie Fain and fiber wearables by Martha Owen will be on display in the Focus Gallery.

Foundry
92 Charlotte St. Hours: Mon.-Sat., 10am-6pm. Info: digfoundry.com.
• Through TU (12/31) - Talula Love Bottoms: Echoes Collection by Maryanne Pappano.

Gallery 86
86 N. Main St., Waynesville. Mon.-Sat., 10am-5pm. Info: haywoodarts.org.
• Through SA (12/28) - It's a Small, Small Work, featuring Matthew Zedler and others.

Grovewood Gallery
Located at 111 Grovewood Road. April-Dec.: Mon.-Sat., 10am-6pm & Sun., 11am-5pm. Info: grovewood.com or 253-7651.
• Through TU (12/31) - Beauty from Wood: Natural and Paper Forms, bowls and vessels by Bill Luce and paper works by Leo Monahan.

N.C. Arboretum
Located at 100 Frederick Law Olmsted Way. 9am-5pm daily. Info: ncarboretum.org or 665-2492.
• ONGOING - A LEGO brick sculpture exhibit will feature works by Sean Kenney.

Push Skate Shop & Gallery
Located at 25 Patton Ave. Mon.-Thurs., 11am-6pm; Fri. & Sat., 11am-7pm; Sun., noon-6pm. Info: pushtoyproject.com or 225-5509.
• Through FR (1/3) - The Crossroads, a multi-media exhibit by Adam Void.

Seven Sisters Gallery
117 Cherry St., Black Mountain. Mon.-Sat., 10am-6pm; Sun., noon-5pm. Info: sevensistersgallery.com or 669-5107.
• Through SU (3/16) - Acrylics and oils by Bridgette Martin-Pyles.

AUDITIONS & CALL TO ARTISTS

Murder Mystery Auditions
• SA (12/21) & SU (12/22), 1-3pm - The Brevard Little Theatre, 55 E. Jordan St., Brevard, will hold auditions for Agatha Christie's A Murder Is Announced. Seven females and five males are needed. Info and appointments: 685-0545 or 240-463-5542.

Rose Post Creative Nonfiction Competition
• Through FR (1/17) - Submissions will be accepted for the 2014 Rose Post Creative Nonfiction Competition, open to residents of NC and NCWN members. $12/$10 for members.

Youth Talent Competition
• Through WE (1/15), 5pm - Young artists may sign up for Transylvania Community Arts Council's Performing Arts Talent Competition, held Jan. 31. Ages 10-17. $5 application fee. Info: tcarts@comporium.net or 884-2787.

{Re}Happening Call for Artists
• Through WE (1/1) - Black Mountain College Museum & Arts Center's {Re}Happening seeks artists for the annual event, which recreates the "happenings," or artists' gatherings, at BMC. Info: rehappening.com.

ART/CRAFT FAIRS

Annual Christmas Arts & Crafts Show
• WE (12/18), 9:30am-4:30pm - A Christmas arts & crafts show will be held at the Old Armory Recreation Center, 44 Boundary St., Waynesville. Free to attend. Info: 456-9207.

Solstice Craft Sale
• SA (12/21), 11am-7pm - Solstice Craft Sale will be held at 307 Waynesville Ave. Local artists will be present to discuss their works. Info: ceramicsong@gmail.com or 423-7119.

TCAC's Santa's Palette
• Through FR (12/20), 9:30am-4:30pm - Transylvania Community Arts Council, 349 S. Caldwell St., Brevard, will hold "Santa's Palette," a holiday show and sale. Free to attend. Info: artsofbrevard.org or 884-2787.

Tryon Holiday Gift Show
• Through TU (12/24), 9am-4pm - Tryon Arts and Crafts, 373 Harmon Field Road, Tryon, will hold its Holiday Gift Show. Info: tryonartsandcrafts.org or 859-8323.

BENEFITS

"Home for the Holidays" Fundraiser for Local Nonprofits (pd.)
This Thursday, December 19, 4pm-10pm, Town and Mountain Realty hosts its 2nd Annual FUNdraiser at the Orange Peel. Proceeds benefit local charities including Manna, WNC Alliance, Helpmate, Eblen Charities, Asheville Humane Society, Habitat and more! Visit from Santa for the kids 4-6, good food, DJ dance party.
• Sponsors and donations needed and appreciated! Website: .com/h4h/ • Contact Town and Mountain Realty: (828) 232-2879 for more information.

COMMUNITY CALENDAR by Hayley Benton & Carrie Eidson
Send your event listings to calendar@mountainx.com.

CONSCIOUS PARTY: Fun fundraisers

Bikes For Kids
• Through SA (12/21) - Donations of new and gently used bicycles and cash will be accepted and donated to children in Buncombe and Madison counties at Weaverville Tire and Wheel, 183 Old Mars Hill Highway, Weaverville; and Fast Lane Auto Sales, 318 Weaverville Highway, Weaverville. Info: slhiggins32@gmail.com, 768-7423 or 645-8330.

CAN'd Aid Benefit at Pisgah Brewing
• SA (12/21), 8pm - Pisgah Brewing and Oskar Blues will host the CAN'd Aid Benefit concert and event for Colorado flood relief efforts at Pisgah Brewing's taproom, 150 Eastside Drive, Black Mountain. $12/$10 in advance. Info: pisgahbrewing.com.

Deck the Trees
• Through TU (12/31) - Deck the Trees, a display of decorated Christmas trees to benefit the Swannanoa Valley Christian Ministries, will be held at The Monte Vista Hotel, 308 West State St., Black Mountain. Free to attend with donations encouraged.
• FR (12/20), 5-8pm - Christmas Party

LEAF Schools and Streets
• WEDNESDAYS, 5-7pm - Wine tasting and jazz, to benefit LEAF Schools and Streets, will be held at 5 Walnut Wine Bar, 5 Walnut St. $5. Info: theleaf.org or Jocelyn@theLEAF.org.

Neighbors In Need Benefit Concert
• TH (12/19), 7-9pm - A holiday music concert to benefit Neighbors in Need, a food pantry which also helps with heating bills, will be held at Marshall Presbyterian Church, 165 Main St., Marshall. Donations encouraged. Info: marshallpres.org.

… a nonprofit community music school, offers private lessons and group instruction for all instruments, voices and styles. 126 College St. Info: 252-6244.

COMEDY

Disclaimer Comedy
Info: disclaimercomedy.com.
• WEDNESDAYS, 9pm - Disclaimer Stand-Up Lounge open mic is held at the Dirty South Lounge, 41 N. Lexington Ave. Free.
• FRIDAYS, 7-8pm - Disclaimer Comedy presents weekly stand-up at Metro Wines, 169 Charlotte St. $10 includes a glass of wine.

Slice of Life Comedy
A comedy showcase held at Pulp, below the Orange Peel, 103 Hilliard Ave. Info and booking: sliceoflifecomedy@gmail.com.
• SU (12/22), 7:30pm - Stand-up comedy and booked open mic holiday show. All proceeds go to Eliada Homes. $10, or free with donation of a toy of equal or greater value.

CLASSES, MEETINGS & EVENTS

Cribbage Gathering
• MONDAYS, 6pm - A weekly cribbage game will meet at Atlanta Bread Company, 633 Merrimon Ave. All levels welcome. Free. Info: peter.ely@gmail.com.

Four Seasons Toastmasters
• WEDNESDAYS, 8-9am - Four Seasons Toastmasters will meet at Lake Pointe Landing, 333 Thompson St., Hendersonville. Info: fourseasonstoastmasters.com.

Henderson County Heritage Museum
Located in the Historic Courthouse on Main St., Hendersonville. Wed.-Sat., 10am-5pm; Sun., 1-5pm. Free unless otherwise noted. Info: hendersoncountymuseum.org or 694-1619.
• Through TU (12/31) - Golden Age: Coming of the Railroad exhibit will include replicas and relics.

Holiday Events
• FR (12/20), 5-9pm - Downtown Brevard art galleries, stores and restaurants will stay open late to accommodate holiday shopping. Free to attend. Info and locations: 884-2787 or artsofbrevard.org.

'Tis the Season
It's a notion heard time and time again as the last few weeks of the year wind down into a chaotic frenzy of last-minute shopping lists and panic over the perfect present: December is the season for giving. But don't be overwhelmed by the seasonal spirit. Xpress has compiled a list of a few opportunities for giving back to local residents — of all species — this holiday season.
Donations for Kids
Give a child the gift of a good read: Through its annual Giving Tree program, City Lights Bookstore, 3 E. Jackson St., Sylva, will offer a 20 percent discount on books for children in need. Eblen Charities will accept toys for the Ingles Toy Store at Westgate Shopping Center through Christmas Eve as part of the Saint Nicholas Project. You can find a full list of the many drop-off locations at eblencharities.org. Bikes for Kids will collect new and gently used bicycles for children in Buncombe and Madison counties through Saturday, Dec. 21. Drop-off locations are Weaverville Tire and Wheel, 183 Old Mars Hill Highway, and Fast Lane Auto Sales, 318 Weaverville Highway.

Animals in Need
WCQS will collect bagged or canned animal chow for Asheville Humane Society and Brother Wolf Animal Rescue. Donations can be brought to the station's office at 73 Broadway. Donations for homeless animals can also be made to The Animal Hospital of North Asheville, 1 Beaverdam Road. Canned or dry food, blankets, toys or monetary donations will be accepted.

Do Good With Good Food
Loving Food Resources invites volunteers to bring some sweetness into the lives of HIV/AIDS patients and others in home hospice. The organization is asking for cookie donations to fill 200 gift boxes for its clients. Donations can be brought to Kenilworth Presbyterian Church, 123 Kenilworth Road, on Friday, Dec. 20. And don't forget, there's still time to grab a tasty snack for a good cause: Eight Days of Food Trucks, where one food truck per day donates a percentage of profits to Buncombe County Service Foundation to support children in foster care, will continue through Dec. 20. Visit the Bom Bus on Dec. 18, El Kimchi on Dec. 19 or Taste and See on Dec. 20 to participate.

DANCE

Beginner Swing Dancing Lessons (pd.)
4-week series starts the first Tuesday of every month at 7:30pm. $12/week per person. No partner necessary. Eleven on Grove, downtown Asheville.
Line Dance Classes
• WEDNESDAYS, 9-10:30am - Henderson County Department of Parks and Recreation will host beginner classes in line dancing. Held at the Athletics and Activity Center, 708 South Grove St., Hendersonville. Registration required. $5 per class. Info: linedanceclass.com or 890-5777.

Goodwill Career Classes
Info and registration: 298-9023, ext. 1106.
• ONGOING - Classes for those interested in careers in the food and hotel industries. Hands-on …

Mountain Shag Club
• TUESDAYS - The club meets weekly at Fred's Speakeasy, 2310 Hendersonville Road, Arden. Free lessons from 6:30-7pm. Shag DJ from 7-10pm. $5. Info: mountainshagclub.com.

Old Farmer's Ball Contra Dance
Held at Warren Wilson College, 701 Warren Wilson Road, Swannanoa, in Bryson Gym. Beginner's lesson at 7:30pm. $6/$5 OFB members/$1 Warren Wilson students. Info: oldfarmersball.com.
• TH (12/19), 8pm - Boom Chuck will perform.
• TH (12/26), 8pm - Contraforce will perform.

Tango Lesson
• FR (12/20), 6:30-10pm - Tango lessons held at French Broad Food Co-op's Movement & Learning Center, 90 Biltmore Ave. Beginners: 6:30-8pm. $20/$10 students. Info: tangogypsies.com.

FESTIVALS

Brevard Gallery Walks
A variety of Brevard galleries and art spots open their doors. Info: artsofbrevard.org or 884-2787.
• FR (12/20), 5-9pm - The Brevard Holiday Gallery Walk will be held in downtown Brevard. Galleries, restaurants and stores will have extended hours.

Carl Sandburg Home Holiday Events
Musicians and storytellers will perform every Saturday from Thanksgiving to New Year's. Located at 81 Carl Sandburg Lane, Flat Rock, three miles south of Hendersonville off U.S. 25. Info: nps.gov/carl or 693-4178.
• SA (12/21), 11am - Pat Corn will sing and play holiday music on the guitar.

Christmas at the Farm
• Through (12/21), 10am-4pm - Christmas at the Farm at Sycamore Farms, 764 S. Mills River Road, Mills River, will include a reading of the Christmas Story and craft demonstrations. $8/$4 children under 3. Additional cost for tour. Info: 891-2487.

Colonial Christmas
• SA (12/21), 10am-5pm - Davidson's Fort, Lacky Town Road, Old Fort, will host "Colonial Christmas," with recreations of 18th-century decorations and historic re-enactors. Admission by donation. Info: davidsonsfort.com.

Holiday Tailgate Markets
• Through WE (12/18), 2-6pm - Weaverville Tailgate/Holiday Market will be held outside the Weaverville Community Center, 60 Lakeshore Drive, Weaverville. Includes food vendors, artisans and craft vendors. Free to attend. Info: weavervilletailgate.org.
• Through SA (12/21), 10am-2pm - Madison County Farmers and Artisans Holiday Market will be held in the lower level of Fiddlestix, 37 Library St., Mars Hill. Includes food and craft vendors. Free to attend. Info: info@marshillmarket.org.

Jingle Bell Trolley Train
• SA (12/21), 4-8pm - Jingle Bell Trolley Train, a holiday-themed train to benefit the Craggy Mountain Line Railroad, will run on a 3-mile section of the historic line. $10/children under 3 free. Departs from the station at 111 N. Woodfin Ave. on the hour.

Lighting of the Green
• Through FR (12/20), 6-8pm - A-B Tech's Lighting of the Green will feature historic homes on the school's Asheville campus decorated for the season. Free. Info: abtech.edu.

Operation Toasty Toes Chapter 7
Makes yarn comfort items that are sent to troops deployed overseas. Info: Info@OperationToastyToes.org or operationtoastytoes.org.
• Through TU (12/31) - Operation Toasty Toes will display Christmas trees dedicated to members of the armed forces at select Henderson County libraries. Families of soldiers are encouraged to provide a photo to Chapter 7 for inclusion. Info: operationtoastytoes.org or 696-9777.

School Choirs at AVL Airport
School choirs will perform on the pre-security side of the Asheville Regional Airport, 61 Terminal Drive, Fletcher. Free to attend.
• WE (12/18), 10:15am - Rosman High School Choir
• WE (12/18), 11am - Clyde A. Erwin High School Choir
• TH (12/19), 2pm - Enka Middle School Choir

A Stan Kenton Christmas
• FR (12/20), 7pm - Hymns and carols performed in big band jazz style by the Asheville Jazz Orchestra; a benefit for Hall Fletcher Elementary School and the Asheville Jazz Orchestra. Free admission. Trinity United Methodist Church, 587 Haywood Road, West Asheville. Info: 253-5471.

GARDENING

Tailgate Markets
• THURSDAYS, 8am-2pm - Henderson County Curb Market, 221 N. Church St., Hendersonville. Ends Dec. 31.
• SATURDAYS, 6am-noon - Caldwell County Farmers Market, 120 Hospital Ave. N.E., Lenoir. Ends Dec. 21.
• SATURDAYS, 9am-noon - Jackson County Farmers Market, 23 Central St., in the Community Table, Sylva. Through March.
• TUESDAYS, 8am-2pm - Henderson County Curb Market, 221 N. Church St., Hendersonville. Ends Dec. 31.
• DAILY, 8am-6pm - WNC Farmers Market, 570 Brevard Road. Ongoing.

ECO

Asheville Green Drinks
A networking party which meets to discuss pressing green issues. Info: ashevillegreendrinks.com.
• WE (12/18), 5:30pm - The group will meet at Green Sage Coffeehouse, 5 Broadway. Emily Coleman-Wolf will present Green Building Council's "Living Building Challenge." Free to attend.

Sing for the Climate
• 3rd SATURDAYS, 5pm - Asheville's Green Grannies invites the public to "Sing for the Climate" at Vance Monument downtown. Info: avl.mx/prph.

GOVERNMENT & POLITICS

Henderson County Democratic Party
Headquarters are located at 905 Greenville Highway, Hendersonville. Info: myhcdp.com or 692-6424.
• 3rd WEDNESDAYS, 11:30am - The Henderson County Senior Democrats will meet at HCDP Headquarters. Bring a bagged lunch. Info: info@myhcdp.com or 692-6424.
• WE (12/18), noon - A meeting will be held at the HCDP Headquarters. Social and BYO lunch at 11:30am. Free.

Holiday Concert
• TH (12/19), 7:30pm - Cantaria, The Gay Men's Chorus of Asheville, will perform a concert of seasonal music from around the world. Held at the Cathedral of All Souls, 9 Swan St. Open dress rehearsal: Sun., Dec. 15, 4pm. Info: CantariaAsheville.org.

Canton Middle School Chorus in Concert at Canton Branch Library
• WE (12/18), noon - Canton Middle School Chorus will perform in the auditorium of the Haywood County Canton Branch Library, 11 Pennsylvania Ave., Canton. Info: 648-2924.

Christmas Festival at First Baptist Church
• SU (12/22), 7-8pm - First Baptist Church of Asheville, 5 Oak St., will host a performance by the church's adult, youth and children's choirs, accompanied by a 40-piece orchestra.

… in Sylva. All proceeds benefit the Community Table, a nonprofit food pantry. $30/$25 advance. Info: barwatt@hotmail.com or 506-2802.

Lake James State Park
6883 N.C. Highway 126, Nebo. Programs are free unless otherwise noted. Info: 584-7728.
• SA (12/21), 10am - Park Ranger Jamie Cameron will lead a shoreline walk searching for winter birds. Meets at the Paddy's Creek Area bathhouse breezeway.
• SU (12/22), 2pm - Park Ranger Jamie Cameron will lead a program for parents and children ages 5-12. Meets at the Paddy's Creek Bridge Area bathhouse breezeway.

Winter Solstice Night Hike
• SA (12/21), 7-9pm - Winter Solstice Night in the DuPont State Recreational Forest will feature a hike to Hooker Falls in the darkness of the longest night. Meets at the Hooker Falls parking lot on DuPont Road, Hendersonville. Info: 692-0385.

… House, 227 Edgewood Ave. (off Merrimon). Donation. Info: 258-3241.

… Sundays, 10am-11:30pm. 29 Ravenscroft Dr., Suite 200, Asheville. (828) 808-4444.

(pd.) Experience living from the natural connection to your heart and the results of joy, peace and love that emanate from within. Tues., 7-9pm, 5 Covington St. Love offering. Info: heartsanctuary.org or 296-0017.

Asheville Spiritual Radio (pd.)
• Saturdays, 1pm - "Guidance For Your Life," a talk show that explains spiritual wisdom. We guide you through the process of incorporating spiritual lessons into your daily life. 880AM, 880TheRevolution.com.

1800s Christmas Eve Candlelight Service
• TU (12/24), 5pm - Asbury Memorial United Methodist Church, 171 Beaverdam Road, will hold an 1800s Christmas Eve Candlelight Service. Free. Info: 253-0765.

Celtic Christian Holiday Service
• SA (12/21) - Honor the Winter Solstice during this service at a private home in Weaverville. An optional vegetarian potluck will be held after the service. Info and location: avalongrove.org or 645-2674.

Asheville Green Drinks, a networking event for people interested in environmental issues and topics, will meet at Green Sage Coffeehouse. Representatives from WNC Green Building Council will discuss living buildings on Wednesday, Dec. 18.

KIDS

Asheville Art Museum
Located on Pack Square in downtown Asheville. Tues.-Sat., 10am-5pm and Sun., 1-5pm. Programs are free with admission unless otherwise noted. Admission: $8/$7 students and seniors/children under 4 free. Free first Wednesdays from 3-5pm. Info: ashevilleart.org or 253-3227.
• TH (12/26) - FR (12/27) & MO (12/30), 1-4pm - Holiday Art Camp with hands-on art activities. $20/$18 for members, per day.

Holly-days at Hands On!
A month-long educational event with a wintery wonderland and holiday theme. Held in Hands On! A Child's Gallery, 318 N. Main St., Hendersonville. Hours: Tues.-Sat., 10am-5pm. $5 admission not included. Info: handsonwnc.org or 697-8333.
• WE (12/18), 2-3:30pm - A reading of the book The Mitten by Jan Brett, followed by a hands-on science class about heating. First class of a two-part series. Ages 6 & up. $8/$2 for members.
• TH (12/19), 2-3:30pm - A reading of the book The Mitten by Jan Brett, followed by a hands-on science class about heating. Second class of a two-part series. Ages 6 & up. $8/$2 for members.
• THURSDAYS, 4-4:30pm - "Yuletide Shake, Rattle, & Rhythm" will teach simple rhythms on different instruments. Ages 5 & up.
• TH (12/19), 10:30-11:30am - Elves Workshop: Wooden Gifts. Ages 5 & up. $7/$2 for members.
• FRIDAYS, 10:30am-noon & 2-4pm - Winter Arts & Crafts
• FR (12/20), 2-3:30pm - Gingerbread House workshop. Ages 8 & up. Registration required. $8/$2 for members.
• FR (12/20), 10:30am-noon - Elves Workshop: Felted Bead Gifts. Ages 8 & up. $7/$2 for members.
• MO (12/23), 10am-noon & 2-5pm - Gingerbread cookie decorating with the Hendersonville Community Co-op. $8/$2 for members.
• TH (12/26) through TU (12/31), 10am-5pm - "The Spirit of Kwanzaa" will include self-directed educational activities. Free with admission.

Play and Learn for Infants and Toddlers
• TUESDAYS, 10:30am & THURSDAYS, 10 & 11am - An 8-week series of pre-literacy classes for parents and children from Buncombe County. Tuesdays: ages 3-12 months; Thursdays: ages 13-35 months. Free. Info, location and registration: grace.ragaller@asheville.k12.nc.us or 350-2932.

Hendersonville Symphony Orchestra
Info: hendersonvillesymphony.org or 697-5884.
• SA (12/21), 3pm & 7:30pm - A performance of "A Carolina Christmas" will be held at Blue Ridge Community College Conference Hall, Flat Rock.

Performances at Diana Wortham Theatre
Located at 2 South Pack Square. Info: dwtheatre.com or 257-4530.
• SU (12/22), 2-7pm - A Swannanoa Solstice, a holiday concert accompanied by local storytellers, dancers and guest musicians. $38/$33 students/$15 children.

Saxophone Christmas Concert
• SU (12/22), 3pm - The Lenior Saxophone Quartet will perform a Christmas concert at St. Matthias Episcopal Church, 1 Dundee St. Donations will be accepted for restoration of the church. Info: leniorsax.org.

Thomas Wolfe Memorial
Located at 52 N. Market St. Info: wolfememorial.com or 253-8304.
• SA (12/21), 6pm & 7:30pm - "Christmas On the Mountain" with Appalachian balladeer and folklorist Shelia Kay Adams. $10.
42nd Street Jazz Band
• SATURDAYS, 6-9pm - The 42nd Street Jazz Band will perform at Kelsey's Restaurant and Lounge, 840 Spartanburg Highway, Hendersonville. Free. Info: 693-9393.

Brio Concert Series
• TH (12/19), 7-9pm - The Wendy Jones Quartet, pianist Sarah Fowler, vibraphonist Jason DeCristofaro and the Joyful Noise String Ensemble will perform an evening of holiday songs at the Weaverville First Presbyterian Church, 30 Alabama Ave., Weaverville. Free. Info: fcpweaverville.org or 273-8254.

OUTDOORS

Assault on Black Rock Registration
• Through SU (3/22) - Registration is open for the "Assault on Black Rock," a 7-mile trail race up Black Rock, located …

… 254-6775.
• SU (12/22), 11am-12:30pm - Spiritual Laws of Life Workshop entitled "The Law of Connecting Diamonds." Free.

First Congregational Church in Hendersonville
Fifth Avenue West at White Pine Street, Hendersonville. Info: 692-8630 or fcchendersonville.org.
• TU (12/24), 5pm - Christmas Eve Candlelight Service of Advent Lessons & Carols.

Grace Lutheran Church
1245 Sixth Ave. W., Hendersonville. Info: gracelutherannc.com or 693-4890.
• WEDNESDAYS - Special Advent worship services will be held the three Wednesdays in December before Christmas. A light supper will be served in Stull Hall from 4:45-5:30pm; the service will start at 6pm. Reservations required for the meal; donations encouraged.
• TU (12/24) - The church will hold a series of Christmas Eve services. 5pm: bilingual service. 7pm & 9pm: candlelight service. Donations for the women's shelter will be accepted.

Kirtan Ceremony
• TUESDAYS, 7-8:30pm - Kirtan with Sangita Devi will be held at Nourish and Flourish, 347 Depot St. $10-$15 donation. Info: sangitadevi.com.

Light Center
2196 N.C. Highway 9 S., Black Mountain. Info: urlight.org or 669-6845.
• DAILY, 10am-4:30pm - Chakra balancing light sessions. Donations accepted.
• DAILY - Seven Circuit Classical Labyrinth. Daylight hours.
• SA (12/21), 3-5pm - Solstice Meditation & Kirtan with Isham. Free.
• SA (12/21), noon-1:45pm - Solstice crystal bowls toning circle. $20 or by donation.

Sisters on the Journey
• WEDNESDAYS, 6:30-8:30pm - Sisters on the Journey women's circle will focus on living genuine, wholehearted and empowered lives. $10. Info and location: 13moons.info, theopeningheart.sec@gmail.com or 335-3540.

• Through SU (12/22) - "O Holy Night," a musical adaptation of the Nativity Story. Wed.-Sat.: 8pm; Thur., Sat. & Sun.: 2pm. $35/$33 seniors, AAA, military/$25 students.
• WE (12/4) through SU (12/22) - A Christmas Story will be held on the MainStage. Wed.-Sat.: 2pm & 8pm; Sun.: 2pm. $35.

Hendersonville Little Theatre
229 S. Washington St., Hendersonville. Info: 692-1082 or hendersonvillelittletheater.org.
• Through SU (12/22) - The Gifts of The Magi, a musical from the stories by O. Henry. Thur.-Sat.: 7:30pm; Sun.: 2pm. $20/$10 under age 18.

Montford Park Players
Unless otherwise noted, performances are free and take place outdoors at Hazel Robinson Amphitheater in Montford. Donations accepted. Info: montfordparkplayers.org or 254-5146.
• Through SU (12/22) - The Montford Park Players will be performing A Christmas Carol at the Asheville Masonic Temple, 80 Broadway St.

The Naughty List
• FR (12/20) & SA (12/21), 8pm - "The Naughty List," a holiday-themed burlesque, will be performed at Toy Boat Community Art Space, 101 Fairview Road. Canned food donations for Loving Food Resources will be accepted. $12. Tickets and info: bombsawaycabaret.com.

Slice of Life Comedy will hold its Holiday Comedy Showcase at Pulp, below the Orange Peel, on Sunday, Dec. 22. The event includes a toy drive for Eliada Homes foster children, with proceeds benefiting the organization.

SPOKEN & WRITTEN WORD

Holiday Pop-Up Shop
• Through SA (1/4) - Asheville BookWorks will host A Gift of Art, with handmade book- and print-related items, at 428 1/2 Haywood Road. Tues.-Fri., 1-5pm; Sat. & Sun., 1-4pm. Info: ashevillebookworks.com.

Spellbound Children's Bookshop
50 N. Merrimon Ave. Free, unless otherwise noted. Info: spellboundchildrensbookshop.com or 708-7570.
• SATURDAYS, 11-11:30am - Story time. Ages 2-6.

Writers' Workshop Holiday Gathering
• SA (12/21), 4-6pm - The Writers' Workshop, 387 Beaucatcher Road, will hold its annual holiday gathering. Reservations required by Dec. 19. Info and registration: twwoa.org or 254-8111.

THEATER

Asheville Community Theatre
Located at 35 E. Walnut St. Tickets and info: ashevilletheatre.org or 254-1320.
• TH (12/19) through SU (12/22) - The Santaland Diaries, by David Sedaris, will be performed Thurs.-Sat., 7:30pm; Sun., 2:30pm. $15.

Black Mountain Center for the Arts
225 W. State St., Black Mountain. Mon.-Fri., 10am-5pm. Info: BlackMountainArts.org or 669-0930.
• TH (12/19), 7:30pm - Rediscovering Christmas features Jim and Carol Anderson in a series of original vignettes about Christmas. $15.

Events at 35below
This black box theater is located underneath Asheville Community Theatre at 35 E. Walnut St. Info: 254-1320 or ashevilletheatre.org.
• THURSDAYS-SUNDAYS through (12/22) - All in the Timing, six one-act comedies, will be performed by the Attic Salt Theatre Company. Thur.-Sat.: 7:30pm; Sun.: 2:30pm. $15.

Flat Rock Playhouse
Mainstage: Highway 225, Flat Rock. Downtown location: 125 South Main St., Hendersonville. Info: flatrockplayhouse.org or 693-0731.

VOLUNTEERING

American Cancer Society
• WEEKDAYS, 9am-1pm - The American Cancer Society seeks volunteers to provide information to cancer patients and their families. Orientation and screening required.

Special Events
• Aurora Studio, a planned collective art space for artists affected by mental illness, homelessness and/or addiction, needs volunteers for planning fundraisers in 2014. Info: aurorastudio-gallery.com or 335-1038.

… 18 and older to share outings in the community twice a month with youth from single-parent homes. Activities are free or low-cost, such as sports, local attractions, etc. Volunteers age 16 and older are needed to mentor one hour per week in schools and after-school sites. Information session: Dec. 12, noon.

Hands On Asheville-Buncombe
Registration required. Youth are welcome on many projects with adult supervision. Info: handsonasheville.org or call 2-1-1. Visit the website to sign up for a project.
• WE (12/18), 6-8:30pm - Volunteers needed to make cookies for hospice patients at CarePartners' John Keever Solace Center.
• SA (12/21), 9am-noon - Help sort and pack food at MANNA FoodBank for agencies serving hungry people in 17 WNC counties.
• SA (12/21), 10am-noon - Volunteers needed to copy and collate packets for distribution to individuals and families that benefit from OnTrack's various financial assistance programs.
• MO (12/23), 7-8:30pm - Volunteers needed to bake cookies for families staying at the Lewis Rathbun Center, which provides free lodging for out-of-town families who have a loved one in an area hospital. Supplies provided.
• TH (12/26), 11am-12:30pm - Shake and Bake: Cook and serve a homemade lunch to the men staying at the ABCCM Veterans Restoration Quarters and Inn. Both men and women are encouraged to participate.

Interfaith Assistance Ministry
• ONGOING - Orientation: Jan. 8 or 9. … noon and Sat., 9-11am and/or 11am-2pm. Help is needed with stocking, helping clients shop, driving, food box delivery, sorting, internet-related tasks, graphic design and office assistance.

Buncombe County Public Libraries
LIBRARY ABBREVIATIONS - All programs are free unless otherwise noted. Each library event is marked by the following location abbreviations:
• FV = Fairview Library (1 Taylor Road, 250-6484)
• SW = Swannanoa Library (101 West Charleston Street, 250-6486)
• WE (12/18), 5pm - Swannanoa Knitters. SW
• TH (12/19), 7pm - Book Club: The Alchemist by Paulo Coelho. FV

City Lights Bookstore
Located at 3 E. Jackson St., Sylva. Events are free, unless otherwise noted. Info: citylightsnc.com or 586-9499.
• TH (12/19), 10:30am - The Coffee with the Poet Series. Bring your favorite Christmas poem to share.
• FR (12/20), 6pm - Gary Carden will recite Appalachian Christmas tales.
• SA (12/21), 6:30pm - The Shepherd of the Hills String Band will perform holiday music.

THRIVING CHILDREN

Bedtime in a Bag Drive
• Through SU (12/29) - North Carolina Stage Company, 15 Stage Lane, will be holding a "Bedtime in a Bag" drive for Children First/CIS. Items accepted: toothbrush/toothpaste, shampoo, underwear, fuzzy towels, pajamas and bedtime stories. Info: 768-2072.

IN THE SPIRIT by Jordan Foltz
Send your spirituality news to jfoltz@mountainx.com.

Celebrating Yule
This week, we will pass through the shortest day of the year and see the beginning of Christmas celebrations. For the occasion, Avalon Grove Celtic Christian Church in Weaverville shared some insights with Xpress about the upcoming winter solstice:

Winter solstice, also known as Yule, occurs this year on Saturday, Dec. 21. It is on this day that the sun flees too early from the gray winter sky and the seemingly endless night begins. At dawn the next day, the sun is reborn and stays in the sky just a little bit longer. Therefore, on winter solstice, we celebrate the rebirth of the sun and the return of the sun's light in a time of darkness.

Yule is an old Anglo-Saxon word for yoke or wheel. It represents the turning of the seasons — in this case, from winter to spring. In the old days, entire families would go deep into the forest to select a tree to become the family's Yule log for their hearth fire. Once it was found and cut, they would drag the large log back to the home. The Yule log burning in their hearth fire would, for a time, drive away the winter's cold, filling the house with light and warmth. It was thought that the burning of the Yule log would set the sun free from the darkness of winter.

As Celtic Christians, we celebrate the birth of the Christ child, who brings hope into our hearts for humanity and all creation. We should remember at this time of year to be kind to both humans and animals. We might also place birdseed outdoors for the birds, so that they, too, can enjoy a holiday feast. For more information on the winter solstice and other Celtic Christian holiday services held throughout the year, visit AvalonGrove.org.

Loving Food Resources
• FR (12/20), 7:30am-4pm - Donations for LFR's "Cookie Party" will be accepted at Ace Hardware North, 812 Merrimon Ave.
• FR (12/20), 6-8pm - LFR asks volunteers to bring homemade cookies to its annual "Cookie Party" and help fill gift boxes for its clients.
Held at Kenilworth Presbyterian Church, 123 Kenilworth Road, in the Fellowship Hall. Administrative Support Volunteer • ONGOING - MemoryCare, a nonprofit dedicated to providing assessment, treatment and support for memory-impaired individuals and their families, seeks a volunteer administrative assistant 2-3 hours a week on Tue., Wed. or Thur. for general office duties. Info: alexander@memorycare.org. Mountain Housing Opportunities • Through (12/31) - Mountain Housing Opportunities seeks low-to-moderate income families for its Self Help Home Ownership Program, "an alternative path to affordable homeownership." No construction experience or down payment required. Info: mtnhousing.org or 254-4030, ext. 122. Operation Santa • Through (12/24) - The Arc of Buncombe County seeks support for Operation Santa, a program which gives gifts to children and adults with intellectual and developmental disabilities. The program is seeking gift cards or monetary donations. Info: moreaboutthearc@arcofbc.org or 253-1255. Project Linus • ONGOING - The local chapter of Project Linus, a nonprofit which donates handmade blankets to children in crisis, seeks volunteers to create blankets. Knitted, crocheted, quilted, no-sew fleece or flannel blankets will be accepted. Info: 645-8800. Window wonderland PHOTOS BY EMILY NICHOLS Kilwin's won for "Best use of Theme"; Fired Up! Creative Lounge was awarded "Best use of Merchandise"; and Sensibilities Day Spa won for "Wow! Factor." More info at ashevilledowntown.org. X
Window wonderland photos: Blue Goldsmiths, Hip Replacements, Sensibilities Day Spa, Kilwin's, Purl's Yarn Emporium, Bloomin' Art. WELLNESS Breaking the silence: Our VOICE helps male victims of sexual abuse By Lea McLellan lmclellan@mountainx.com 251-1333 ext. 127 Ezra Post, the only male counselor at Our VOICE rape crisis center, coordinates the new One in Six project for men. Photo by Nichole Civiello WELLNESS CALENDAR by Hayley Benton & Carrie Eidson Yoga for the Eyes (pd.) Fridays, 10:45-12:00 - Natural vision improvement through Yoga, Qigong and the Bates Method. Nourish & Flourish, 347 Depot St., River Arts District. All levels. Instructor: Nathan Oxenfeld. $12.
integraleyesight.com Understanding the Affordable Care Act (ACA) (pd.) Platinum Exchange is offering free 30-minute public presentations on Understanding the Affordable Care Act (ACA) at the Asheville Chamber, 3rd floor. Mondays at 12:15pm, 1:15pm and 2pm, and Wednesdays at 12:15pm. More info: platinumexchange.com Pilates Mat (pd.) Monday 6:30pm, Wednesday 1:30pm, Thursday 6:30pm, Saturday 10:30am. Individualized, comfortable classes held at Happy Body. Call 277-5741. Registration suggested. $12, details at www.AshevilleHappyBody.com Yoga (pd.) Tuesday 6:30pm, Wednesday 6am, Friday 6am & 8:30am, Sunday 9:30am. Yoga is for everybody at Happy Body, 1378 Hendersonville Rd. Call 277-5741. Registration suggested. $12, details at www.AshevilleHappyBody.com Autism Society Screening of Elf • SU (12/22), 1pm - Autism Society of North Carolina and Asheville Pizza and Brewing Company will screen a sensory-friendly showing of Elf. All ticket proceeds to benefit ASNC. $3. Info: 236-1547 or ashevillebrewing.com. Red Cross Blood Drives Info: redcrosswnc.org or 258-3888. Appointment and ID required for blood drives. • TH (12/19), 7am-6pm - Mission Hospital, 501 Biltmore Ave. Appointments and info: 1-800-REDCROSS. • MO (12/23), 1:30-5:30pm - Care Partners, 68 Sweeten Creek Road. Appointments and info: 277-4800 ext. 4744. Yoga for Veterans • MONDAYS, 7-8pm - A yoga class for veterans and their families will be offered at Asheville Yoga Donation Studio, 239 S. Liberty St. All levels. Instructor: Ashley Poole. Free. Info: youryoga.com or 254-0380.
• 7pm - Grace Covenant Presbyterian Church; Davidson River Presbyterian Church, 249 E. Main St. Balance Point Collaborative Located at 263 Haywood St. unless otherwise noted. Info: balancepointnc.com or 348-6922. • TUESDAYS, 5:30-6:30pm - New Voice, a support group for eating disorder recovery. Free. Debtors Anonymous 12-step recovery on issues of underearning, debt and learning to live one's vision in life. Info: 779-0077. • MONDAYS, 7pm - Meets at First Congregational UCC, 20 Oak St., Room 101. Info: 631-434-5294. NAMI Support Groups The National Alliance on Mental Illness offers three types of groups to support people living with mental health issues and their families, friends and loved ones. Free. Info: namiwnc.org, 505-7353 or 474-5120. • TUESDAYS, 7pm - Meets at First Congregational Church, 20 Oak St. Info: 273-1280. • WEDNESDAYS, 2pm - Dual Diagnosis Support Group. For individuals with MH/SA diagnoses. 3 Thurland Ave., off Biltmore Avenue. • 1st SATURDAYS, 10am; 3rd TUESDAYS, 6pm - Family/Caregiver group for people supporting someone experiencing a mental health issue. 356 Biltmore Ave., Suite 315. Alcoholics Anonymous (AA)
Narcotics Anonymous (NA) 6th Ave. W., Hendersonville. Enter through side parking lot. Info: 891-8050. Overeaters Anonymous A fellowship of individuals who are recovering from compulsive overeating. A 12-step program. • THURSDAYS, 6:30pm - Step Study group at the Cox House, 723 N. Grove St., Hendersonville. Info: 329-1637. • THURSDAYS, noon - Biltmore United Methodist Church, 376 Hendersonville Road. Info: 674-2417. • FRIDAYS, 10am - Step Study group at Biltmore United Methodist Church, 376 Hendersonville Road. Info: 277-1975. • SATURDAYS, 9:30am - 424 W. State St., Black Mountain. Open relapse and recovery meeting. Info: 669-0986. • MONDAYS, 6pm - First Congregational UCC, 20 Oak St. Info: (516) 650-5626. • MONDAYS, 6:30pm - Balfour United Methodist Church, 2567 Asheville Highway, Hendersonville. Info: 800-580-4761. • TUESDAYS, 10:30am-noon. S-Anon • ONGOING - An anonymous 12-step program for those affected by another's sexual behavior. Four meetings available weekly in WNC. Days, times, locations and additional info: 258-5117. SMART Recovery A peer support group to help individuals gain independence from all types of addictive behavior (drugs, alcohol, gambling, sex, etc.). • THURSDAYS, 6pm - Grace Episcopal Church, 871 Merrimon Ave. Info: smartrecoveryavl@gmail.com or 407-0460. • MONDAYS, 6:30pm - An additional group will meet at St. Andrew Celtic Church, 850 Blue Ridge Road, Black Mountain. T.H.E. Center for Disordered Eating 297 Haywood St. Info: thecenternc.org or 337-4685. • WEDNESDAYS, 7-8pm - Support group for adults. Free. • 3rd SATURDAYS, 10-11:30am - A support group for family members, caregivers and friends of individuals struggling with eating disorders. Led by licensed professionals. Free. • 1st & 3rd MONDAYS, 5:30-6:30pm - Eating disorder support group for teens ages 15-17.
WNC Brain Tumor Support • 3rd THURSDAYS, 6:30-8pm - WNC Brain Tumor Support meets at MAHEC, 121 Hendersonville Road. Info: wncbraintumor.org or 691-2559. • TH (12/19), 6:15-8pm - WNC Brain Tumor Support will hold a Christmas party at MAHEC Biltmore Campus, 121 Hendersonville Road, for adult patients, survivors, families and caregivers. Info: 691-2559 or wncbraintumor.org. More wellness events online Check out the Wellness Calendar online at www.mountainx.com/events for info on events happening after December 26. Calendar deadline The deadline for free and paid listings is 5 p.m. WEDNESDAY, one week prior to publication. Questions? Call (828) 251-1333, ext. 365. FOOD House party: National Gingerbread House Competition has local heart By Hayley Benton For this year's space-themed creation, Joyce says they used circle molds to make the planets and cake-pop transporters. JOLLY HOLIDAY: Lilli McFerrin's Mary Poppins-inspired entry, "A Spoonful of Sugar," won third prize in the teen category in this year's competition. Photo by Haley Steinhardt PUTTING IT TOGETHER: Annie Joyce's special-education students at Reynolds High constructed their own entry for this year's National Gingerbread House Competition. Their creation made it into the Top 10 in their category.
Photo by Jenni Wiseman Transporting the creations is one of the competitors' biggest challenges. "Travel [with] these gingerbread houses is not easy," says Johnston. So head on over and marvel at these heartfelt and often astonishingly well-made creations. Through Jan. X FOOD by Elizabeth Reynolds McGuire fortydaysfika.com Fika Files: Tidings of coffee and joy at Filo There is a little rock building on Tunnel Road in East Asheville. To me, it has always been a cool-looking structure, but nothing more. Historically, it was a meeting place for the American Legion. However, recently I realized that behind that amazing architectural exterior there's a cozy, bohemian, European-inspired cafe. Honestly, I didn't know there were funky, cool, possible fika places in East Asheville. Boy, was I wrong. Fika (fee-ka) is Swedish for the idea of sharing a cup of coffee and conversation with another person. It's the act of slowing down in the midst of the day to simply be — with yourself or with others. Always on the lookout for great new places to sip coffee and meet friends, I was intrigued by Filo (fee-lo). I opened the cafe's castle-like door and entered into a huge open space filled with a warm, glowing light that made it bright and yet, still very cozy. I am certain that someone heard me breathe a deep sigh of "perrrrrfect." I don't think I said that out loud, but I can't be sure. Maria Papanastasiou has owned and managed Filo for almost eight years.
She says that she has tried to create a "comfortable and inviting place" that focuses on petite pastries and savory bites. Of course, the coffee, tea, wine and beer that accompany her yummy goodies are also part of Papanastasiou's vision. An Asheville native and a New York City-educated pastry chef, Papanastasiou was inspired by her family's Greek roots and her travels in Europe to bring her brand of fika culture to Asheville. She's always understood the importance of "taking personal time, which gives [her] inspiration and energy" and seeks to spread that focus in her café, she says. Filo fosters these traditionally European values of creating and sustaining friendships while sharing coffee and pastries, which is the heart of fika. Filo is also a cafe where people meet around big, wooden tables or comfortable, plush chairs to conduct business or make new friends. EUROPEAN FLAVOR: Filo owner Maria Papanastasiou was inspired by her family's Greek heritage and her travels in Europe to bring her own brand of fika culture to East Asheville. Photo by Elizabeth Reynolds McGuire I met Asheville newcomer Shayla Morrigan, who described it as a great place to "really enjoy the chance to meet with new people in such a comfortable setting." And James MacKenzie agreed, advising, "If you come by for its design, you'll definitely stay for its delicious lattes." Filo's friendship-nurturing atmosphere comes naturally: The word "filo" is based on the Greek word for "friend," philos. So, the entire concept of Papanastasiou's cafe is built around the idea of good food, good coffee and good times with good people. Filo's European influence and inspiration remind us to make time to enjoy life. That's what fika is all about.
Especially during the hustle and bustle of this holiday season, it's important to slow down just enough to savor life's moments. There is always time for a cup of coffee. This holiday season, let Filo's name remind you of friendship: grab a buddy and stop by for a leisurely, friendly fika. You'll be glad you did. X FOOD by Michael Franco writermikef@gmail.com Food trucks revisited D.O.G.S. LIFE: Decrepit Old Geezers Sausages food-truck owner Bill Drake, right, and his wife, Marlene, started their business as a hot dog cart four years ago. Photo by Michael Franco A veteran food-truck owner tells his side of the story Drake and his wife, Marlene, have been running D.O.G.S. for about five years. Drake tells Mountain Xpress, "It seems like all the articles about the food-trucks lately have focused on the negativity — how hard it is, how much time it takes and how little money they make. How about an article about the other guys — those of us who are doing well, making money and not working 70 hours a week." X
FOOD Small bites: Notes from the Asheville food scene photo by Toni Sherwood CREPERIE BOUCHON'S "SOCIAL CHEESE PLATE" Want the fun of cooking over a campfire without the smoke, uncomfortable rocks and freezing weather? Raclette could be just the thing. The convivial Swiss dining tradition was recently introduced by Chef Michel Baudoin to the menu at Creperie Bouchon as the restaurant prepares to remain open all winter for the first time. Raclette refers to both a soft, creamy cow's-milk cheese by that name, as well as a meal cooked with that cheese right at the table on heated marble slabs — a practice that has been around for hundreds of years. Legend has it that herdsmen accidentally left cheese on a rock near the campfire, then scraped it off only to discover a delicious treat. The word raclette comes from the French racler, which means "to scrape." The basic raclette is a hearty combination of cheese and garnishes such as cornichons, cocktail onions and fingerling potatoes served with bread — the perfect accompaniment to a glass of wine. Diners melt the cheese in a mini-skillet placed under the stone and grill the other ingredients on the stone's surface. The process can get creative with the addition of extras including fresh strawberries, pear and apple, as well as mushrooms, roasted red peppers, roasted chicken breast, spicy andouille sausage, duck sausage, honey-roasted ham and bacon. There is also a fun dessert option: French s'mores with house-made sugar cookies, Nutella and marshmallows. Food and Beverage Manager Bill Cooke says the restaurant is introducing raclette "because it's a social way of eating." Chef Baudoin agrees. "Raclette is a great fit because the Creperie is very informal and very convivial," he says.
"It's sort of a social cheese plate to share where everyone can get involved and create their own small bites." Another tasty tidbit is that all crepes at Creperie Bouchon are made with gluten-free flour created especially for the restaurant by Moon Rabbit Foods, based out of Barnardsville. And there's great news for diners who love all-you-can-eat mussels at Bouchon: This winter you can also get them at Creperie Bouchon every Monday, Tuesday and Wednesday. Creperie Bouchon is at 62-1/8 N. Lexington Ave. in the courtyard next to Bouchon. It is open 11 a.m.-9:30 p.m. Monday-Saturday, and 11 a.m.-5 p.m. Sunday. — Toni Sherwood MOVING ON: After Tomato Jam shuts its doors on Dec. 20, Chef Daniel Wright will be looking for new opportunities for putting his award-winning culinary skills to use. FAREWELL (FOR NOW), TOMATO JAM — Toni Sherwood LOVE IN A BOX — Mary Pembleton X Holi-DIY: Festive handmade projects from across WNC Compiled by Alli Marshall Who doesn't have a soft spot for a candy-covered gingerbread house or a handmade ceramic menorah? Here, Xpress readers offer some of their favorite festive crafts and the memories those bring to mind.
MEGAN WATSON etsy.com/shop/TheMiddleSisterCo Natural burlap wreath with a red bow and green spray. MARY BETH HER. CRYSTAL RODRIGUEZ etsy.com/shop/CrystalKnits77 Knitted hat and scarf for a celebratory bottle of wine. VIKKI ROGERS pinkhousetreasures.wordpress.com. SARA MARSHALL Greenery for sprays. SUE WILLE. CORINA PIA TORII facebook.com/toriistudios. DARLENE HATCHETT etsy.com/shop/hatchettdesigns Christmas miniature in a silver server piece. PHIL CHENEY dynamicartgallerielevel. Never mind the instructions, it's full-steam ahead from here. CATE SCALES thegoodsavl.com. A&E by Alli Marshall amarshall@mountainx.com Outward bound: Rising Appalachia returns to Asheville for a solstice show The Smiths' household was rich in creativity, and Leah says their parents instilled the value of travel "in the old sense, where it was a means of educating yourself on culture." These days, travel is a constant for the Smiths, though they do have a few places that feel more like home.
One is Western North Carolina — they'll perform a solstice show at The Orange Peel Saturday, Dec. 21, before heading out across the country again for a string of dates. The Peel is a big step up from the group's bursting-at-the-seams BoBo Gallery shows a few years back. Earlier this fall, Rising Appalachia headlined The LEAF's Lakeside Stage for the first time. That set proved so popular that there was no room under the stage's giant tent — and the crowd outside stood about 10 deep. Leah says the band has a bigger following in other places: They've played FloydFest and on a main stage at a Swedish festival. "We went to Bulgaria and were treated like rock stars," she recalls. But the sisters kind of grew up at LEAF, where they started out performing on the lawn. To gain recognition at a festival that feels like family means a lot to them. "They're doing a good job with the model of that festival and all its outreach work," notes Chloe. Outreach is also important to Rising Appalachia. "My relationship to performance came slowly, out of a bigger desire to do education and front-line work, figuring out how to have an impact in the world," says Leah. Early on, she questioned the validity of touring as a means of activism, but over time she found that she could extend her influence beyond the stage to "folks who were working on the ground." When possible, the Smiths make a point of working in schools or prisons on the afternoons before their shows. Often, outreach takes the form of a story circle, "a space for people to have dialogue and feel heard," says Leah. The band rarely performs in those situations, though. If you made a word cloud from a conversation with sisters Leah and Chloe Smith, "community" would be the most heavily weighted term. Music, art, family and travel are all prominent themes, but it's community that runs like a thread throughout the Smiths' lives, and the on- and off-stage work of their band, Rising Appalachia.
"What's powerful about being in parts of Central America, urban Latin America and New Orleans is the relationship to community living," says Leah. "I come and go, so I'm a nomadic part of the community. But that relationship with building an alternative and tight-knit reality, which is not connected to this Western way of living, is what I'm attracted to, culturally. It's also become the main focal point of our music." WHO: Rising Appalachia WHERE: The Orange Peel, theorangepeel.net WHEN: Saturday, Dec. 21, 9 p.m. $12 advance/$15 day of show Rising Appalachia, says Leah, is about a relationship to roots music — the kind that's played on front porches around the world. Used as a storytelling device, it's a way to hold on to memories and influences as disparate as beat-boxing, Afro-Cuban percussion, banjo and jazz horns, which all find their way into the sonic brew. What began as a study of how communities choose to live has funneled into "a life mission," says Leah. Reared on hip-hop, the sisters grew up in Atlanta, the daughters of a Beat poet father and a flight attendant/fiddle-playing mother. COMMUNITY CENTER: Sisters Leah and Chloe Smith of Rising Appalachia have developed their approach to music, performance and touring to include outreach and connection to the communities they visit. Photo by Melisa Cardona "I do think music is a catharsis," she explains, "but for me it feels really good to have a relationship first, before we come in as performers, because that can create a barrier." Chloe adds, "We have a lot of people in our fan or family base who ask us how to strive and be involved, so we're also trying to figure out that balance." To connect the dots, Rising Appalachia taps local nonprofits and performers for each tour leg.
The band's current run of shows includes Atlanta-based spoken-word artist Theresa Davis as the opener; she'll also connect with poets in each town. In Asheville, the Smiths have planned an evening-long journey including an Aztec dance ceremony and local West African-influenced band Mande Foly. Soul Visions, a dub-remix project based on Rising Appalachia's acoustic material, will close the show. "So it's kind of like a mini-festival environment," says Leah. "The audience can be involved with what's going on in their community, and we can create a container for that to happen." Figuring out what's happening in the underbelly of each tour stop, she notes, feeds her inner anthropologist. "It's like, how deep can we go in 12 hours?" One place where the Smiths have gone deep is New Orleans. Chloe calls it "a soul base and a creative base." And though the sisters are on the road about 90 percent of the time, whenever they return to the Louisiana city they feel instantly embraced by the network of fellow musicians who also call it home. "Everyone comes to the mountains and hides out. You write and rest and have a retreat space," says Chloe. "But you roll into New Orleans, and your creative battery is charged. You're surrounded by art and music, and you're taken right back into the river of creativity." X A&E by Toni Sherwood toni_sherwood@yahoo.com Hot for the holidays: Bombs Away Cabaret's XXXmas Claudette Cleavage is the emcee. She's a shady businesswoman and part-time dominatrix. Ophelia Bottome is a fame-obsessed, opera-singing, wannabe socialite. Rod O'Steele is a vapid, self-interested, class-conscious ex-Long Islander. And Iona Traylor is a sweet, folksy, naive West Virginia native. The audience is both spectator and insider, privy to the behind-the-scenes machinations of the cabaret.
Joining Bombs Away Cabaret's colorful cast of characters are several local performers, including bar poet Barbie Dockstader Angell, Andrew Benjamin of experimental/cabaret outfit Hellblinki, dancer Elizabeth Evans and juggling troupe 40 Fingers and a Missing Tooth. The revue will include guest geek-burlesque performer Talia de Neko of Asheville-based FTW Burlesque. Geek-burlesque sprang from Comic-Con culture and incorporates characters sourced from comic books, anime, film, graphic novels and video games, such as Catwoman, Barbarella, Betty Draper and Pikachu, to name a few. The show draws inspiration from more than just the ghosts of Christmas past. Shehan says, "I love the cheesy '70s Christmas variety shows, especially with really bizarre combinations of celebrities wearing Christmas sweaters, like David Bowie and Bing Crosby." Performer and publicist Joseph Barcia (Rod O'Steele) adds, "Our show is a cross between that and RuPaul's Christmas specials in the '90s." They describe Bombs Away's style as contemporary, fun and irreverent adult humor, with roots in vaudeville-style burlesque, spiked with tongue-in-cheek comedic wit. Shehan credits George Burns and Gracie Allen as her biggest vaudeville influences: "I love their quick witty banter and the way they sing and dance as if it's something you'd normally do while talking." Barcia is inspired by early Bette Midler but also says, "I think we're heading more in the John Waters direction, especially with meta-cabaret character Claudette Cleavage." "People need to laugh, especially around Christmas," says Bombs Away Cabaret performer and coordinator Amber Shehan (aka Iona Traylor). The performance troupe's mission is to restore burlesque to its comedic roots, and the group is constantly striving for a balance between inappropriate and socially acceptable, sexual and sensual. Shehan deadpans, "My mom says burlesque is OK if it's for charity." XXXmas: The Naughty List is just that.
The holiday-themed revue, which opens this weekend at the Toy Boat Community Art Space, is both a fundraiser for Bombs Away Cabaret's upcoming full-length show, planned for this spring, and a food drive. WHAT: Bombs Away Cabaret presents XXXmas: The Naughty List WHERE: Toy Boat Community Art Space WHEN: Friday and Saturday, Dec. 20 & 21, at 8 p.m. $12 in advance by credit card or cash only at the door. Audience members are encouraged to bring canned goods to benefit Loving Food Resources, a volunteer-run food pantry that provides basic needs to people living with HIV/AIDS. As for the show, be prepared for satirical spins on traditional holiday songs, including "I'll Be Home For Christmas," "Santa Baby," "Blue Christmas" and "Diamonds Are A Girl's Best Friend." Audience members may recognize the tunes, but the lyrics are sure to surprise. The show revolves around four central characters, all members of a cabaret troupe situated in modern-day Asheville. SANTA BABY: Bombs Away Cabaret's holiday-themed revue showcases vaudeville, comedy, "boylesque," geek-burlesque and satirical spins on traditional holiday songs. Photo courtesy of Bombs Away Cabaret. But Barcia insists they will never stoop to being shocking just for the sake of it, aiming for humor created by the disparity between ribald and innocuous personalities. Shehan goes on to say, "We want to be a comedic reflection of characters happening in the world around us." The troupe uses present-day situations and references in its comedic sketches. Unlike the so-called perfect physiques lauded by the fashion industry, modern burlesque embraces all sizes and shapes. This can be especially empowering in a society obsessed with image. "Anyone in our culture is likely to have body dysmorphia," Barcia says. When Shehan first joined the group in 2011, she herself felt deficient. But performing burlesque has bolstered her self-esteem.
Shehan says, “I’ve gained a lot of confidence, and it’s been a big help having a safe, comfortable place to express myself confidently and sensually, and have positive feedback.”

Barcia had a different issue to overcome: “Sometimes people think it’s strange because I’m a male. I have to explain that I don’t do drag. It’s not even androgyny. I’m basically doing the same thing that the girls are doing except it’s a male experience.” He says his performance is in line with the “boylesque” tradition that began in San Francisco, combining the glam of burlesque with a decidedly masculine aesthetic. One of his characters is a male spin on the Jewish American Princess archetype.

“Just having a male member makes us edgier,” Barcia says. No pun intended. X

A&E by Lea McLellan, lmclellan@mountainx.com

More than just a store
Traveling trunk show grows roots in Asheville

DREAMING OF A GREEN CHRISTMAS: The Urban Gypsy trunk show specializes in upcycled and budget-friendly apparel. Photo courtesy of Elle Erickson

Looking for a different kind of holiday shopping experience? Sifting through Elle Erickson’s vintage, handmade and generally offbeat clothing and accessories at her traveling trunk show isn’t the kind of retail therapy you can find at the mall. In fact, where the Urban Gypsy trunk show will pop up next has always been something of a sweet surprise for downtown shoppers. But with Erickson making a permanent move from Charlotte to Asheville, customers can expect to see more of her and her eclectic, inexpensive wares.

According to Erickson, shopping should be an event. And she knows how to make it the kind of party to which fashion-oriented Ashevilleans want to be invited. “It feels like more than just a store,” says Erickson, who plans to feature Thai massage and tarot card readings at an upcoming show. “I try to make it a fun experience. I’m an extrovert and I love people.
… We dance and we laugh and we get silly — we might even have a mimosa and just keep it light.”

Erickson lived in Charlotte for 13 years — all the while, she says, she dreamed of relocating to Asheville. She visited to set up shop every six months or so. Now that she is local, she plans to hold a trunk show each month. Erickson says Asheville is the perfect place for her business. “It seems like Asheville kind of gets it more than other cities,” she says. “There has to be something kind of funky and different about the item for it to get into the trunk show, and that’s kind of how I feel about the people here. They’re one-of-a-kind, eclectic, funky, artistic people, so they would gravitate towards that.”

WHAT: Urban Gypsy trunk show, urbangypsytrunkshow.com
WHERE: Hyphen Coffeehouse, 81 Patton Ave.
WHEN: Saturday, Dec. 21, 11 a.m.-8 p.m., and Sunday, Dec. 22, 1-6 p.m.

Not only is her aesthetic well-suited to the Asheville customer, but she also says that her trunk shows appeal to eco- and budget-conscious shoppers as well — the average cost of an item is $10. “Everything that I sell is recycled, so there’s kind of a guilt-free aspect of shopping, which is nice,” she says. “And it’s really budget-friendly for people who are artists and need to watch how much they’re spending on their clothes. ... I think it will appeal to the tourists and the locals because it’s different. When I’m out buying, I have a good eye. Nothing is just your basic thing.”

And for customers craving locally sourced fashion, Erickson is also looking to buy clients’ used clothes and incorporate more items from local jewelry-makers and artists. Though she’s officially an Ashevillean now, Erickson will maintain a bit of her wanderlusting ways and isn’t planning on signing a lease for a permanent spot. At least, not yet. She says she will “pop around to different places to keep it fresh. … I don’t know about landing somewhere.
I haven’t figured that out yet. I’m not sure about the store, but it’s definitely a possibility. The trunk shows are fun, but it’s a lot of work. So maybe I’ll end up on Lexington Avenue before you know it.” X

SMART BETS by Alli Marshall. Send your arts news to ae@mountainx.com.

Floating Action
“Proris

Dax Riggs
“For me, Dax is redefining blues music,” wrote Henry Rollins (he of Black Flag, spoken word and massive biceps). When Rollins recommends a musician, it’s probably best not to argue. Dax Riggs performs at Broadway’s on Thursday, Dec. 19, at 9 p.m. $10. avl.mx/03h

Project/Object
Celebrating the music, life and Dec. 21 birthday of composer/engineer/visionary Frank Zappa, Project/Object puts on a rare show. The lineup includes founder André Cholmondeley (pictured; a recent transplant to Asheville) with a stellar cast of national and local musicians. Eric Slick (Dr. Dog), Derrick Lee Johnson (Booty Band), Jamar Woods (The Fritz) and Keith Harry (Deja Fuze) make up the group — the Asheville show will be their first time, collectively, as Project/Object.
Guitarist Denny Walley also makes an appearance: He met Zappa when they were both growing up in California and went on to be part of both Zappa’s and Captain Beefheart’s bands. Of playing with the former, he told one interviewer, “It was like being taken to college.” Project/Object will be at The Grey Eagle on Sunday, Dec. 22, at 8 p.m. $12/$15. thegreyeagle.com. Watch for a review at mountainx.com. Photo by Robin Gelberg

The Double Crown
The Double Crown’s “anniversary party” (according to the venue’s Facebook) also meant overcoming some serious bad juju from its predecessor. But the good vibes prevailed, and The Double Crown is celebrating in style. Gospel and R&B group Legendary Singing Stars, formed in the ’60s by the late Tommy Ellison, return (they played the Double Crown’s 2012 launch, and performed again at the bar in July). Plus, food from Mama’s Kitchen. Saturday, Dec. 21, at 10 p.m. $10. thedoublecrown.com. Photo courtesy of The Double Crown
CLUBLAND

Wednesday, Dec. 18
185 King Street: Harper & Motor City Josh (blues, rock, funk, soul), 8pm
5 Walnut Wine Bar: Hot Point Trio (jazz), 5-7pm; Juan Benavides Trio (Latin), 8-10pm
Adam Dalton Distillery: 3D: Local DJ party (electronic, dance), 9pm
Altamont Brewing Company: Hank West Soul Party (jazz, funk), 8pm
Ben's Tune-Up: Karaoke w/ The Diagnostics, 10pm
Black Mountain Ale House: Bluegrass jam w/ The Deals
Blue Mountain Pizza & Brew Pub: Open mic w/ Mark Bumgarner, 7-9pm
Club Eleven on Grove: Studio Zahiya Holiday Dance Show, 7pm
Club Hairspray: Requests w/ DJ Ace of Spade, 8pm
Cork & Keg: Irish jam w/ Beanie, Vincent & Jean, 7pm
Double Crown: DJ Dr. Filth (country), 9pm
Iron Horse Station: Jesse James (Americana), 6-9pm
Isis Restaurant and Music Hall: Vinyl night, 9pm
Jack of the Wood Pub: Old-time jam, 5pm
Lexington Ave Brewery (LAB): Eric & Erica (pop, ambient), 9pm
Lobster Trap: Ben Hovey (dub-jazz, trumpet, electronics), 7pm
Odditorium: Wizard Skin w/ Roamer X, Alonaluna, Peace Arrow & Bois (experimental), 9pm
Olive or Twist: Swing dance lesson w/ Bobby Wood, 7-8pm; 3 Cool Cats Band (vintage rock and roll), 8-11pm
Sly Grog Lounge: Open mic, 7pm
TallGary's Cantina: Open mic & jam, 7pm
The Mothlight: Torche (stoner pop, rock) w/ Midnight Ghost Train & Skullthunder, 9:30pm
The Phoenix: Jazz night, 8pm
The Social: Karaoke, 9:30pm
Town Pump: Open mic, 9pm
Trailhead Restaurant and Bar: Open jam, 6pm
Tressa's Downtown Jazz and Blues: Wednesday night jazz w/ Micah Thomas, Tyler Kittle & Mike Holstein, 8:30pm
Vincenzo's Bistro: Aaron Luka (piano, vocals), 7pm
Jack of the Wood Pub: Bluegrass jam, 7pm
Lexington Ave Brewery (LAB): Colton DeMonte, Steve Marcinowski, Matt White & Louis Bishop (comedy), 9pm
Lobster Trap: Hank Bones ("man of 1,000 songs"), 7-9pm
Odditorium: Lords of Chicken Hill w/ The Bob Band (rock, punk), 9pm
Olive or Twist: Swing/salsa and bachata dance lessons w/ Randy,
7-8pm; DJ Mike Filippone (rock, disco, dance), 8-11pm
One Stop Deli & Bar: Phish 'n' Chips (Phish covers), 6pm; Tauk w/ Captain Midnight Band (rock, fusion), 10pm
Pack's Tavern: Eric Congdon (blues), 9pm
Pisgah Brewing Company: Chris Padgett of Stereofidelics (indie), 8pm
Purple Onion Cafe: Jimmy Landry (country, folk), 7:30pm
Root Bar No. 1: Kenny Freeman & the Smokey Gringoes (outlaw country), 9:30pm
Scandals Nightclub: Dance party, 10pm; Drag show, 12:30am
Southern Appalachian Brewery: Hunnilicious (folk, Americana), 7-9pm
Spring Creek Tavern: Pierce Edens (roots-rock), 6-9pm
TallGary's Cantina: Rock & roll showcase, 9:30pm
The Mothlight: Diarrhea Planet (rock) w/ No Regrets Coyote & Future West, 9:30pm
Timo's House: Asheville Drum and Bass Collective, 9pm
Town Pump: Dave Desmelik (singer-songwriter)
Westville Pub: Lea Renard & Triple Threat (blues), 9:30pm
WXYZ Lounge: Jamar Woods (soul, funk, piano), 8-10pm

Friday, Dec. 20
185 King Street: The BattleAxe Band (Americana), 8pm
5 Walnut Wine Bar: Jamar Woods Acoustic Band (funk, soul), 10pm-midnight
Altamont Brewing Company: Stuart McNair (folk, country, Cajun), 9pm
Asheville Music Hall: Bob Schneider w/ Ruston Kelly (singer-songwriter, rock), 9pm
Athena's Club: Mark Appleford (singer-songwriter, Americana, blues), 7-10pm; DJ, 10pm-2am

Grass Monkey, self-described as “Gonzo-mericana,” will play at both Jack of the Wood on Friday, Dec. 20, at 10 p.m., and Jack of Hearts on Saturday, Dec. 21, at 10 p.m. Hailing from Roanoke, Grass Monkey has been described as “punk-grass” and “bluegrass by way of Fear and Loathing” by critics.
Blue Mountain Pizza & Brew Pub: Acoustic Swing, 7-9pm
Club Eleven on Grove: DJ Jam (old-school hip-hop, R&B, funk), 9pm
Cork & Keg: Red Hot Sugar Babies (various jazz styles), 8:30pm
Double Crown: Greg Cartwright (garage, soul), 11pm
French Broad Brewery Tasting Room: Buncombe Turnpike (bluegrass), 6pm
Grey Eagle Music Hall & Tavern: Blonde Blue & Friends (blues), 8pm
Havana Restaurant: Ashley Heath (singer-songwriter), 7pm
Highland Brewing Company: Chalwa (reggae-rock), 6pm
Iron Horse Station: Kevin Reese (Americana), 7-10pm
Isis Restaurant and Music Hall: Town Mountain w/ Larry Keel & Natural Bridge (old-time, acoustic), 9pm
Jack of the Wood Pub: Grass Monkey (rock, bluegrass), 9pm
Jerusalem Garden: Middle Eastern music & belly dancing, 7-9:30pm
Lexington Ave Brewery (LAB): The Mobros (rock, soul), 9:30pm
Monte Vista Hotel: Randy Hale (jazz, blues, pop), 6pm
Odditorium: Nate Hall presents Poison the Snake, American Landscape & Bask (metal), 9pm
Olive or Twist: 3 Cool Cats Band (vintage rock and roll), 8:30-11:30pm
One Stop Deli & Bar: Free Dead Fridays feat. members of Phuncle Sam, 5-8pm
Orange Peel: Delbert McClinton (rock, blues) w/ Alyssa Bonagura, 9pm
Pack's Tavern: DJ Ocelate (pop, dance), 9pm
Pisgah Brewing Company: Vortex II beer release party w/ The Blood Gypsies (funk), 6:30pm
Root Bar No. 1: Darlyne Cain (rock, acoustic), 9:30pm
Scandals Nightclub: Zumba, 7pm; Dance party, 10pm; Drag show, 1am

Thursday, Dec. 19
185 King Street: Jazz night w/ Bill Berg, 8pm
5 Walnut Wine Bar: Hank West & The Smokin' Hots (jazz, exotica), 8-10pm
Ben's Tune-Up: Island dance party w/ DJ Malinalli, 10pm
Black Mountain Ale House: Lyric (acoustic, soul)
Blue Mountain Pizza & Brew Pub: Patrick Fitzsimons (roots), 7-9pm
Club Hairspray: Karaoke, 8pm
Club Remix: Reggae dance night, 9pm
Cork & Keg: Vollie McKenzie & Jack Dillen (Beatles covers, jazz), 8pm
Double Crown: DJs Devyn & Oakley, 9pm
Emerald Lounge: Sexy, John Wilkes Boothe and The Black Toothe & Luzius Stone, 8:30pm
French Broad Brewery Tasting Room: Dave Dribbon (acoustic), 6pm
Havana Restaurant: Open mic (band provided), 7pm
Isis Restaurant and Music Hall: An Evening of Thumb-Pickin' and Jazz, 8pm

CLUBLAND: Send your listings to clubland@mountainx.com.
Southern Appalachian Brewery: Pleasure Chest (blues, rock, soul), 8-10pm
Spring Creek Tavern: Andy Buckner (Southern rock), 8-11pm
Straightaway Cafe: Wilhelm Brothers (folk, Americana), 6pm
TallGary's Cantina: OverHead (rock), 9:30pm
The Mothlight: Floating Action (rock, surf, soul, lo-fi) w/ Coconut Cake, 9:30pm
Timo's House: Earthtone SOUNDsystem (house), 9pm
Town Pump: Linda Mitchell & the Electric Cabaret (blues, jazz), 9pm
Toy Boat Community Art Space: Bombs Away Cabaret (variety show), 8pm
Tressa's Downtown Jazz and Blues: Early Spotlight w/ Outside Suburbia (blues, alt-rock, indie), 7pm; Peggy Ratusz & Daddy Longlegs (blues, jazz, soul), 10pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Comedy open mic, 10pm
White Horse: Michael Jefry Stevens & Wendy Jones (jazz, Christmas show), 8pm
Wild Wing Cafe: A Social Function Trio (acoustic), 9:30pm
WXYZ Lounge: Ben Hovey (dub-jazz, trumpet, beats), 9-11:30pm

Saturday, Dec. 21
185 King Street: Cody Siniard (singer-songwriter), 8pm
5 Walnut Wine Bar: Cheeksters (jazz), 10pm-midnight
Asheville Music Hall: Aligning Minds w/ Futexture & Deloscinari (live audio/visual), Push/Pull & Numatik (IDM, fusion), 10pm
Athena's Club: Mark Appleford (singer-songwriter, Americana, blues), 7-10pm; DJ, 10pm-2am
Black Mountain Ale House: Cecil Thompkins Band (Americana, bluegrass), 9pm
Bywater: Solstice jam w/ The Blood Gypsies, Jonathan Scales Orchestra & Brushfire Stankgrass (bluegrass, folk, jazz), 9pm
Club Hairspray: DJ Brian Sparxxx, 8pm
Cork & Keg: Hotpoint Trio (gypsy jazz), 8:30pm
Double Crown: Lil Lorruh (50s & 60s R&B, rock 'n' roll), 10pm
Emerald Lounge: Aaron Woody Wood, George Terry & Zealots, A.J. Usher Band & Emerald Blues Band (rock, funk, blues), 8:30pm
French Broad Brewery Tasting Room: Peggy Ratusz (jazz), 6pm
Grey Eagle Music Hall & Tavern: Greg Brown & Bo Ramsey (singer-songwriter, folk) w/ RB Morris, 8pm
Havana Restaurant: Mande Foly (African, acoustic), noon; Noah Stockdale (guitar, harmonica), 7pm
Highland Brewing Company: David Zoll Trio (retro-pop), 6pm
Iron Horse Station: Mark Bumgarner (Americana), 7-10pm
Isis Restaurant and Music Hall: Have Yourself a Swinging Little Christmas w/ Russ Wilson (big band), 8pm
Jack of Hearts Pub: Grass Monkey (rock, bluegrass), 9pm
White Horse: Bob Margolin (blues), 8pm
Wild Wing Cafe: Ryan Perry Duo, 9:30pm
WXYZ Lounge: Ritmo Latino w/ DJ Malinalli (Latin DJ, dance), 9pm-midnight

Sunday, Dec.
22
5 Walnut Wine Bar: Mande Foly (African), 7-9pm
Asheville Music Hall: Sky Walkers w/ BomBassic (electronic, hip-hop), 8pm
Ben's Tune-Up: Vinyl night (open DJ collective)
Black Mountain Ale House: NFL Sunday w/ pre-game brunch at 11:30am, 1pm
Jack of the Wood Pub: Woody Pines (ragtime), 9pm
Jerusalem Garden: Middle Eastern music & belly dancing, 7-9:30pm
Lexington Ave Brewery (LAB): Bubonik Funk w/ Dillon & Ashe (soul, rock), 9:30pm
Lobster Trap: Riyen Roots Trio w/ Kenny Dore (blues), 7pm
Monte Vista Hotel: Kevin Lorenz (instrumental guitar), 6pm
Odditorium: Common Visions, Cube, Rick Weaver & more (punk), 9pm
Olive or Twist: DJ (50s-90s dance music), 8:30-11:30pm
One Stop Deli & Bar: Bluegrass Brunch w/ Grits & Soul, 11am
Orange Peel: Rising Appalachia (roots, world), 9pm
Pack's Tavern: A Social Function (rock, hits), 9pm
Pisgah Brewing Company: CAN'd Aid benefit w/ Sanctum Sully (bluegrass), 8pm
Purple Onion Cafe: The Deluge (Americana, soul), 8pm
Root Bar No.
1 Ten Cent Poetry (indie-folk), 9:30pm
Scandals Nightclub: Dance party, 10pm; Drag show, 12:30am
Smokey's After Dark: Karaoke, 10pm
Southern Appalachian Brewery: Junction 280 (bluegrass), 8-10pm
Straightaway Cafe: BullFeather (acoustic), 6pm
TallGary's Cantina: Rory Kelly (rock), 9:30pm
The Mothlight: Holiday tango dance, 8pm-midnight
The Social: Karaoke, 9:30pm
Timo's House: Skymatic w/ Electrochemical (electronica, funk, rock), 9pm
Town Pump: Peace Jones (classic rock), 9pm
Toy Boat Community Art Space: Bombs Away Cabaret (variety show), 8pm
Tressa's Downtown Jazz and Blues: Ruby Mayfield & The Friendship Train (blues), 10pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Dave Dribbon & the Stomping Rain (country, rockabilly, soul), 10pm
Blue Kudzu Sake Company: Karaoke brunch, 1-5pm
Blue Mountain Pizza & Brew Pub: Locomotive Pie (blues), 7-9pm
Club Hairspray: DJ Ra Mac, 8pm
Double Crown: Karaoke w/ Tim O, 10:30pm
Grey Eagle Music Hall & Tavern: Project Object Asheville w/ Denny Walley, 8pm
Isis Restaurant and Music Hall: Sunday jazz showcase, 6pm
Lobster Trap: Leo Johnson (hot club jazz), 7-9pm
Monte Vista Hotel: Daniel Keller (jazz), 11am
Scandals Nightclub: Dance party, 10pm; Drag show, 12:30am
Southern Appalachian Brewery: Todd Hoke (folk, Americana), 5-7pm
TallGary's Cantina: Sunday Drum Day, 7pm
The Social: '80s vinyl night, 8pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
White Horse: Noonday Feast (Celtic), 7:30pm

Monday, Dec.
23
5 Walnut Wine Bar: Sufi Brothers (folk), 8-10pm
Altamont Brewing Company: Old-time jam, 7pm
Blue Mountain Pizza & Brew Pub: Patrick Fitzsimons (roots), 7-9pm
Bywater: Open mic w/ Taylor Martin, 9pm
Double Crown: Punk 'n' roll w/ DJ Leo Delightful, 9pm
Emerald Lounge: Blues jam, 8pm
Lobster Trap: Dana & Susan Robinson (folk), 7-9pm
Odditorium: Open dance night, 9pm
Oskar Blues Brewery: Old-time jam, 6-8pm
Sly Grog Lounge: Trivia night, 7pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Trivia night, 8pm

Asheville natives Sanctum Sully describe themselves as “traditionally untraditional.” Strumming out their hybrid of rock and bluegrass, the band will be on stage at the Pisgah Brewing Company in Black Mountain on Saturday, Dec. 21, at 9 p.m.
as a part of Oskar Blues Brewing’s CAN’d Aid Foundation benefit for Colorado flood relief.

Tuesday, Dec. 24
Asheville Music Hall: Funk jam, 11pm
Ben's Tune-Up: Dance party w/ DJ Rob, 10pm
Club Hairspray: Trivia night, 8pm
Creekside Taphouse: Bluegrass jam, 7pm
Double Crown: Punk 'n' roll w/ DJs Sean and Will, 9pm
Lobster Trap: Jay Brown (Americana, folk), 7-9pm
Odditorium: Comedy open mic w/ Tom Peters, 9pm
Olive or Twist: 42nd Street Band (jazz), 8-11pm
Tressa's Downtown Jazz and Blues: Lyric (acoustic), 8pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Blues jam, 10pm
White Horse: White Horse Christmas Eve Show w/ Irish music & open mic, 7pm

Wednesday, Dec. 25
Olive or Twist: 3 Cool Cats Band (vintage rock 'n' roll), 8-11pm

Thursday, Dec. 26
5 Walnut Wine Bar: Hank West & The Smokin' Hots (jazz, exotica), 8-10pm
Ben's Tune-Up: Island dance party w/ DJ Malinalli, 10pm
Black Mountain Ale House: Lyric (acoustic, soul), 9pm
Blue Mountain Pizza & Brew Pub: Locomotive Pie (blues), 7-9pm
Club Hairspray: Karaoke, 8pm
Club Remix: Reggae dance night, 9pm
Cork & Keg: Vollie McKenzie & Jack Dillen (Beatles covers, jazz), 8pm
Double Crown: DJs Devyn & Oakley, 9pm
Havana Restaurant: Open mic (band provided), 7pm
Lobster Trap: Hank Bones ("man of 1,000 songs"), 7-9pm
O.Henry's/TUG: Open mic w/ Jill Siler, 8pm
Olive or Twist: Swing, salsa & bachata lessons w/ Randy Basham, 7-8pm; DJ Mike Filippone (rock, disco, dance), 8-11pm
One Stop Deli & Bar: Phish 'n' Chips (Phish covers), 6pm
Pack's Tavern: Jeff Anders & Justin Burrell (rock), 9pm
Pisgah Brewing Company: Dave Zoll Trio (rock, jam), 8pm
Scandals Nightclub:
Dance party, 10pm; Drag show, 12:30am
TallGary's Cantina: Rock & roll showcase, 9:30pm
Timo's House: Asheville Drum and Bass Collective, 9pm
Town Pump: Doug Neff (guitar)
White Horse: Dance lessons, 6-7:45pm; Asheville Tango Orchestra, 8pm
WXYZ Lounge: Shane Perlowin (jazz guitar), 8-10pm
Jerusalem Garden: Middle Eastern music & belly dancing, 7-9:30pm
Lobster Trap: King Leo Jazz, 7-9pm
Monte Vista Hotel: Randy Hale (jazz, blues, pop), 6pm
Odditorium: Jacked Up Joe w/ Amnesis, Twist of Fate (metal), 9pm
Olive or Twist: 3 Cool Cats Band (vintage rock 'n' roll), 8:30-11:30pm
One Stop Deli & Bar: Free Dead Fridays feat. members of Phuncle Sam, 5-8pm; Pericles w/ Cleofus (electronic), 10pm
Pack's Tavern: DJ Moto (dance, pop), 9pm
Pisgah Brewing Company: The Mantras ("alt-grass") w/ Brushfire Stankgrass, 9pm
Root Bar No. 1: Linda Mitchell (blues, jazz), 9:30pm
Scandals Nightclub: Dance party, 10pm; Drag show, 1am
Spring Creek Tavern: Mark Bumgarner (Americana), 8-11pm
TallGary's Cantina: Contagious (rock), 9:30pm
Timo's House: Philo w/ The Professor, DJ Jet (hip-hop), 9pm
Tressa's Downtown Jazz and Blues: Early Spotlight w/ The Lowdown (jazz), 7pm; Al Coffee & Da Grind (soul, blues), 10pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Comedy open mic, 10pm
Wild Wing Cafe: A Social Function Trio (acoustic), 9:30pm
WXYZ Lounge: Ahora Si (salsa), 9-11:30pm

Saturday, Dec.
28
185 King Street: Strung Like a Horse (psychobilly), 8pm
5 Walnut Wine Bar: Juan Benavides Trio (Latin, jazz), 10pm-midnight
Athena's Club: Mark Appleford (singer-songwriter, Americana, blues), 7-10pm; DJ, 10pm-2am
Black Mountain Ale House: The Low Counts (rock, Americana, blues), 9pm
Blue Mountain Pizza & Brew Pub: Mark Bumgarner (folk, Americana), 7-9pm
Club Hairspray: DJ Brian Sparxxx, 8pm
Cork & Keg: Zydeco Yayas (zydeco), 8:30pm
Double Crown: Lil Lorruh (50s & 60s R&B, rock 'n' roll), 10pm
French Broad Brewery Tasting Room: Dave Desmelik (singer-songwriter), 6pm
Grey Eagle Music Hall & Tavern: Wham Bam Bowie Band (David Bowie tribute) w/ The Cheeksters, 9pm
Havana Restaurant: Mande Foly (African, acoustic), noon; Noah Stockdale (guitar, harmonica), 7pm
Iron Horse Station: Barb Turner (R&B), 7-10pm
Jack of the Wood Pub: Firecracker Jazz Band, 9pm
Jerusalem Garden: Middle Eastern music & belly dancing, 7-9:30pm
Lexington Ave Brewery (LAB): Old North State w/ Kiernan McMullen (bluegrass, Americana), 9:30pm

Friday, Dec.
27
185 King Street: Hometown Holiday Jam w/ Jeff Sipe, Mike Guggino, Mike Ashworth, Corey Bullman & more, 8pm
5 Walnut Wine Bar: The Lions Quartet (jazz), 10pm-midnight
Athena's Club: Mark Appleford (singer-songwriter, Americana, blues), 7-10pm; DJ, 10pm-2am
Blue Mountain Pizza & Brew Pub: Acoustic Swing, 7-9pm
Club Eleven on Grove: Salsa night, 10pm
Cork & Keg: One Leg Up (gypsy jazz, Latin, swing), 8:30pm
Double Crown: Greg Cartwright (garage, soul), 11pm
French Broad Brewery Tasting Room: Matt Walsh (blues, rock), 6pm
Grey Eagle Music Hall & Tavern: The Blue Rags ("rag-n-roll"), 9pm
Havana Restaurant: Ashley Heath (singer-songwriter), 7pm
Iron Horse Station: Dana & Susan Robinson (folk), 7-10pm
Isis Restaurant and Music Hall: Christmas in Scotland w/ Jamie Laval (music, stories), 8pm
Jack of the Wood Pub: Groove 8, 9pm
Alternative-punk band The Lords of Chicken Hill will bring “fun, upbeat, energetic and surprising” notes to Asheville’s Odditorium along with “environmental peace punkers” The Asheville BOB Band on Thursday, Dec. 19, at 8 p.m.

Lobster Trap: Sean Mason Jazz, 7-9pm
Monte Vista Hotel: Laura Thurston (acoustic, folk-grass), 6pm
Odditorium: Jenny Lou's Sweet 16 Pinup Show w/ The Go Devils (rock), 9pm
Olive or Twist: WestSound Band (rock, Motown), 8:30-11:30pm
One Stop Deli & Bar: Bluegrass Brunch w/ Grits & Soul, 11am; Daryl Hance w/ Woody Wood (Southern rock, blues, funk), 10pm
Scandals Nightclub: Dance party, 10pm; Drag show, 12:30am
Smokey's After Dark: Karaoke, 10pm
Spring Creek Tavern: Natty Love Joys (reggae), 8-11pm
TallGary's Cantina: Unit 50 (rock), 9:30pm
The Social: Karaoke, 9:30pm
Timo's House: Subterranean Shakedown: Ho-Tron w/ Soundpimp, D:Raf, Selector Cleofus (bass), 9pm
Tressa's Downtown Jazz and Blues: Bayou Diesel (Cajun, zydeco), 10pm
Vincenzo's Bistro: Steve Whiddon (old-time piano, vocals), 5:30pm
Westville Pub: Wilhelm Brothers (folk rock), 10pm
Wild Wing Cafe: Sarah Tucker Duo, 9:30pm
WXYZ Lounge: Vinyl Time Travelers (DJ duo), 9-11:30pm
Oskar Blues Brewery: Riyen Roots Duo w/ Kenny Dore (blues), 7pm
Pack's Tavern: Aaron Luka (piano, rock, hits), 9pm
Pisgah Brewing Company: Asheville Horns (funk, jazz-fusion),
9pm
Purple Onion Cafe: Clay Ross (acoustic), 8pm
Root Bar No. 1: Thicket (prog-rock), 9:30pm

American Hustle HHHHS

FRIDAY, DECEMBER 20 - TUESDAY, DECEMBER 24
Due to possible scheduling changes, moviegoers may want to confirm showtimes with theaters.

DIRECTOR: David O. Russell
PLAYERS: Christian Bale, Bradley Cooper, Amy Adams, Jennifer Lawrence, Jeremy Renner
CHEERFULLY AMORAL, SOMETIMES FACT-BASED COMEDY DRAMA
RATED R
THE STORY: Vaguely fact-based (Abscam) comedy drama about not-very-bright people trying to out-con each other.
THE LOWDOWN: Funny, cynical and even a little demented, David O. Russell’s latest boasts incredible turns from its high-powered cast, a genuine sense of the late 1970s and a pop soundtrack to die for.

Asheville Pizza & Brewing Co. (254-1281)
Please call the info line for updated showtimes.
Elf (PG) 1:00, 4:00 (free admission)
Cloudy with a Chance of Meatballs 2 (PG) 7:00
Last Vegas (PG-13) 10:00

Carmike Cinema 10 (298-4452)
No shows after 7:30 p.m. Christmas Eve.
American Hustle (R) 12:45, 4:05, 7:05, 10:10
The Hobbit: The Desolation of Smaug 3D (PG-13) 12:00 (Fri-Sat), 3:30, 7:00, 10:30
The Hobbit: The Desolation of Smaug 2D (PG-13) 11:30, 1:30, 3:00, 5:00, 6:30, 8:30, 10:00
The Hunger Games: Catching Fire (PG-13) 1:05, 1:45, 2:15, 4:10, 4:50, 5:30, 8:15, 8:45, 10:20
Saving Mr. Banks (PG-13) 1:00, 4:00, 6:50, 9:45
Thor: The Dark World 2D (PG-13) 1:20, 4:20, 7:20, 9:55
Walking with Dinosaurs 3D (PG) 4:15, 9:15
Walking with Dinosaurs 2D (PG) 11:45, 2:00, 6:55

Carolina Cinemas (274-9500)
Note: Theaters usually close early on Christmas Eve.
American Hustle (R) 11:30, 2:15, 5:30, 8:00, 10:00, 10:30
Anchorman 2: The Legend Continues (PG-13) 11:00, 12:00, 1:35, 2:15, 4:15, 5:30, 7:00, 8:15, 9:35, 10:50
The Book Thief (PG-13) 10:30,
1:15, 4:00, 9:45; Dallas Buyers Club (R) 11:40, 2:15, 4:45, 7:15, 9:50; Frozen 2D (PG) 10:45, 1:00, 3:30, 6:00, 8:30; The Hobbit: The Desolation of Smaug 3D (PG-13) 12:30, 6:00, 9:30; The Hobbit: The Desolation of Smaug 2D (PG-13) 11:20, 3:30, 7:00, 10:20; The Hunger Games: Catching Fire (PG-13) 10:45, 1:45, 4:45, 7:45; Nebraska (R) 10:30, 1:00, 3:45, 6:15, 9:00; Philomena (PG-13) 11:15, 1:30, 3:45, 6:00, 8:15; Saving Mr. Banks (PG-13) 11:30, 2:15, 5:00, 7:30, 10:00; Tyler Perry's A Madea Christmas (PG-13) 11:00, 1:15, 3:30, 5:50, 8:10; Walking with Dinosaurs 3D (PG) 10:30, 4:00; Walking with Dinosaurs 2D (PG) 2:00, 2:30, 5:00, 6:00, 8:00
Cinebarre (665-7776)
Co-ed Cinema Brevard (883-2200): The Hobbit: The Desolation of Smaug (PG-13) 12:00, 4:00, 8:00
Epic of Hendersonville (693-1146)
Fine Arts Theatre (232-1536): Dallas Buyers Club (R) 7:20, late show Fri-Sat 9:50; Nebraska (R) 1:00, 4:00, 7:00, late show Fri-Sat 9:30; Philomena (PG-13) 1:20, 4:20
Flatrock Cinema (697-2463): Philomena (PG-13) 1:00 (no 1:00 show Wed), 4:00, 7:00 (no 7:00 show Tue)
Regal Biltmore Grande Stadium 15 (684-1298)
United Artists Beaucatcher (298-1234)

Amy Adams and Christian Bale star in David O. Russell's cheekily amoral dark comedy American Hustle.

...washed-up Jennifer Lawrence), who is both magnificently dumb and venal (a dangerous combination). For that matter, the FBI itself is pretty bozo-ridden. Seriously, what rational group would try to palm off Michael Peña as an Arab sheik? Why would they listen to Richie's hare-brained scheme in the first place? Even Richie's relatively rational immediate [...] David Bowie's "Jean Genie," Paul McCartney's "Live and Let Die" and even the Bee Gees' "How Can You Mend a Broken Heart." Rarely has rampant duplicity and stupidity sounded so good. Rated R for pervasive language, some sexual content and brief violence. reviewed by Ken Hanke. Starts Friday at Carolina Cinemas and other as yet undetermined area theaters.

Movies by Ken Hanke & Justin Souther. Contact xpressmovies@aol.com. HHHHH = max rating.

Carpentry by Lucy: Insured • Over 30 Years Experience • AGC Certified Master Residential Carpenter • NC Licensed Journeyman Carpenter • Residential and Commercial Remodeling • Interior Painting • 658-2228 • Offer expires 12/31/13

The Hobbit: Desolation of Smaug HHH
Director: Peter Jackson
Players: Martin Freeman, Ian McKellen, Richard Armitage, Ken Stott, Luke Evans
Fantasy adventure. Rated PG-13
The Story: Hobbit Bilbo Baggins and his dwarf companions travel through Middle Earth to breach the lair of a deadly dragon.
The Lowdown: Yet another overlong Tolkien adaptation, this one suffers from a sense of corner-cutting and a lack of emotional center or any real dramatic arc.
reviewed by Justin Souther. Playing at Carmike 10, Carolina Cinemas, Co-ed of Brevard, Epic of Hendersonville, Regal Biltmore Grande.

Nebraska HHHHS
Director: Alexander Payne
Players: Bruce Dern, Will Forte, June Squibb, Bob Odenkirk, Stacy Keach, Mary Louise Wilson, Rance Howard
Drama comedy. Rated R
The Story: A delusional old man insists on traveling to Lincoln, Neb., to claim his "winnings" in a contest he hasn't actually won.
The Lowdown: A sometimes unpleasant look at small-town life that's nicely balanced by a warmly human — and sometimes very funny — take on family relations and how little we know of each other. Another awards-season keeper.

On the surface, the film is all about Woody Grant (Bruce Dern), an elderly, not-always-genial drunk whose grasp on reality is slipping. He has received one of those come-[...] lively, especially Dern's performance as the befuddled (but not entirely gone) Woody. You should see this one. Rated R for some language. reviewed by Ken Hanke. Starts Friday at Carolina Cinemas and Fine Arts Theatre.
staRting wEDnEsDay Anchorman 2: The Legend Continues Three terrific movies hit town on Friday, so the Wednesday opening of this sequel to the unaccountably popular 2004 Anchorman: The Legend of Ron Burgundy will give you time to get it out of your system. This is assuming you haven’t reached Burgundy burnout from Dodge commercials and the character showing up on what passes for real newscasts. You already know if this Will Ferrell comedy is in your future. (pg-13) ALL Sunday Shows $1 Wednesdays ALL Tuesday Shows $2 Saving Mr. Banks HHHHS DiREctoR: John Lee Hancock (The Blind Side) pLayERs: Emma Thompson, Tom Hanks, Colin Farrell, Ruth Wilson, Paul Giamatti, Bradley Whitford, B.J. Novak, Jason Schwartzman fact-BasED comEDy DRama RatED pg-13 thE stoRy: Highly colored version of Disney getting the rights to make Mary Poppins. thE LowDown: Realistic? Hardly. Factual? Only in its barest outline. First class entertainment? Oh, my, yes. And Emma Thompson is superb. staRting fRiDay Every Mon-Thu ALL Shows $1 After 9pm Saturday Morning Shows ONLY $1 $2 domestic drafts College Night Walking with Dinosaurs There are two — equally overemphatic — trailers for Walking with Dinosaurs. One suggests a semi-serious computer-animated attempt to “document” the life of dinosaurs. The other looks like the usual anthropomorphic kiddie-movie with an empowerment message — only with talking dinosaurs. My guess is that the second interpretation is nearer to the mark. Check out the array of voice actors like Karl Urban, John Leguizamo and Justin Long, not to mention dinosaurs named Ricky, Uncle Zack, Alex and Patchi. You have been warned. (pg) Sat & Sun - Brunch Menu for all shows before 12pm Movie Line 828-665-7776 Biltmore Square - 800 Brevard Rd Asheville, NC 28808 cinebarre.com I genuinely resent Saving Mr. Banks. If ever there was a movie I was primed to dislike, this was it. I am not a fan of Walt Disney and am completely resistant to his Magic Kingdom. 
I am not especially keen on the 1964 film, Mary Poppins. (In some regards, that's an understatement.) I have not been impressed by John Lee Hancock's previous movies. And the whole thing just looked like saccharine-sweetened, treacly rubbish. What I resent, however, is how very much I liked Saving Mr. Banks. I spent the entire time knowing I was being lied to — or at least being fed an incredibly glossy and highly fictionalized story that never let pesky facts get in the way of its agenda. I never for one moment felt like I was watching anything other than Tom Hanks with a mustache playing a cozy, fantasy Uncle Walt, or that Emma Thompson was really anyone other than Emma Thompson playing a crowd-pleasing construct of author P.L. Travers. Despite it all — even while cringing at some of those Sherman brothers songs and Dick Van Dyke's faux-Cockney end-of-the-pier foolishness in clips from Mary Poppins — I was largely enchanted by what took place on-screen. It takes a harder heart than mine to resist this. In order to enjoy this movie, it's necessary to realize only that the principal characters did indeed exist. Walt Disney did indeed woo P.L. Travers for years to be allowed to turn her Mary Poppins into a film. It is true that she resisted because she didn't want a cartoon, she didn't want songs and she didn't want schmaltz.

Saving Mr. Banks: See review in "Cranky Hanke"
American Hustle: See review in "Cranky Hanke"
Nebraska: See review in "Cranky Hanke"

Community Screenings
Jimmy Stewart Film Series: All films are shown at Pack Memorial Library, 67 Haywood St. All events are free unless otherwise noted. Info: 250-4700. • TU (12/24), 3pm - It's a Wonderful Life

Join us for dinner Sun-Thur for our new 5-course tasting menus, $33. modestonc.com • Grove Arcade • 828.225.4133

Mike Miller, Realtor: Asheville native. Call me, you'll like Mike! 828-712-9052 • mmiller@townandmountain.com
It is also true that she ultimately gave in. Beyond that, you just have to go with it — even with full knowledge of the fact that she did (in part) end up with some animation, a full serving of songs and a large dose of schmaltz. There's more than a little schmaltz — oh, hell, there's lots of schmaltz — in this fantasy version of the events, but little bits of truth are wedged in around the edges. Not the least of those nuggets of truth is that the Disney film — however you feel about it — opened up her books to a new and wider readership. That the film also at least attempts to get at the core of Travers' feelings about the Mary Poppins character is admirable, if not entirely successful. That it doesn't present the whole story of P.L. Travers is no great sin. The film is about the making of Mary Poppins. It is not a biopic on Travers. Saving Mr. Banks is a confection and should be viewed as such. In that regard, it works admirably, which is all you can reasonably ask. It is beautifully cast. Emma Thompson is wonderful. Without her performance, the film would be unthinkable. Tom Hanks is ... well, Tom Hanks, but he does pull off the public Disney. (The film only hints at the private figure with the secret smoking, the "pre-signed" autographs of Walt's art-department-designed signature and the undercurrent of steely determination.) Colin Farrell, who plays Travers' father in cross-cut flashbacks, is very good — as is the way the flashbacks play against the contemporary (1961) scenes. (One scene, in fact, verges on brilliant.) Bradley Whitford as beleaguered screenwriter Don DaGradi, and B.J. Novak and Jason Schwartzman as the similarly browbeaten songwriting Sherman brothers are exceptionally good, while Paul Giamatti manages what should have been an impossibly gooey role with aplomb.
The scenes of Travers locking horns with the Disney creative staff are frankly delightful — and the "Let's Go Fly a Kite" segment is thoroughly charming, even if you're resistant to the song itself. I make no apologies for falling for this obviously spruced-up, sanitized and, yes, Disneyfied version of events. It's simply splendid entertainment. Letting yourself be put off by its loose depiction of the facts is cutting yourself off from a very pleasant moviegoing experience. (And let's be honest, not many of this year's holiday offerings are necessarily pleasant — regardless of how good they are.) If it's only true in the broadest strokes, so what? Anyone who goes to the movies expecting a history lesson is already in trouble. As a story of the creative experience — no matter how candy-coated — it's pretty darn good. Rated PG-13 for thematic elements including some unsettling images. reviewed by Ken Hanke. Starts Friday at Carolina Cinemas and other as yet undetermined area theaters.

Tyler Perry's A Madea Christmas H
Director: Tyler Perry
Players: Tyler Perry, Anna Maria Horsford, Tika Sumpter, Eric Lively, Larry the Cable Guy, Kathy Najimy
Alleged comedy. Rated PG-13
The Story: Madea goes to rural Alabama with her friend Eileen for Christmas. Supposedly funny things happen.
The Lowdown: Thoroughly dispiriting and often just mean-spirited Madea film represents another step back for Tyler Perry. This will not keep it from making a healthy profit. The high-toned [...] kneeslapping.
reviewed by Ken Hanke. Playing at Carolina Cinemas, Epic of Hendersonville, Regal Biltmore Grande, United Artists Beaucatcher.

Special Screenings

The Spirit of the Beehive HHHHS
Allegorical drama. Rated NR
World Cinema closes out 2013 (they return on Jan. 10) with an encore screening of Victor Erice's acclaimed The Spirit of the Beehive (1973), a story about a fanciful little girl in an isolated Spanish town in 1940, who is deeply affected by seeing the 1931 Frankenstein — to the degree that she believes that a Loyalist soldier hiding in a barn is the Monster. Classic World Cinema by Courtyard Gallery will present The Spirit of the Beehive Friday, Dec. 20, at 8 p.m. at Phil Mechanic Studios, 109 Roberts St., River Arts District (upstairs in the Railroad Library). Info: 273-3332.

The Sentinel HHHH
Horror. Rated R
Critically savaged at the time of its release in 1977 for being excessively gory and in shockingly bad taste, Michael Winner's The Sentinel has managed to become, well, almost respectable in the intervening years. Almost. It no longer seems that excessive (which may not be a good thing), even though its central premise of the entrance to hell being in a Brooklyn apartment building is still pretty silly. Now, its glossy professionalism and its undeniably creepy atmosphere are what stand out. No matter how dumb its premise seems, while the movie's on the screen, it's deliciously unsettling. The Thursday Horror Picture Show will screen The Sentinel Thursday, Dec. 19, at 8 p.m. in the Cinema Lounge at The Carolina Asheville and will be hosted by Xpress movie critics Ken Hanke and Justin Souther.

PRESENTS: Your guide to area restaurants & bars. NEW GUIDE COMING IN MAY! Contact advertise@mountainx.com for details!

MARKETPLACE
Real Estate | Rentals | Roommates | Services | Jobs | Announcements | Mind, Body, Spirit | Classes & Workshops | Musicians' Services | Pets | Automotive | Xchange | Adult

contact Julie Durham-Defee, julie.durham-defee@meridianbhs.org • For further information and to complete an application, visit our website: www.meridianbhs.org/open-positions.html

CSAC COUNSELOR • MALE THERAPIST: Established Counseling Center looking for a male therapist. Must have CSAC credentials. Prefer someone with Substance Abuse work background.
Should be familiar or have worked with Domestic Violence Abuser programs in past. Our center runs the 26 week Domestic Violence Abuser program and we're seeking a male counselor to help run our Saturday group. Additional Substance Abuse contract work available. Please contact Colleen directly at The Relationship Center, (828) 388-0011. FAmily PreservAtion services The Hendersonville office of Family Preservation Services of NC is experiencing significant growth. We have employment opportunities in the following positions: licensed outpatient therapist; community support team lead therapist; community support team QP; and day treatment QP. Please send your resume to dreynolds AFFordAble hoUsinG ProGrAm sPeciAlist Mountain Housing Opportunities is seeking a part-time program specialist. Responsibilities include recruiting low-income families for our Self-Help Homeownership Program through a variety of marketing and outreach efforts; assisting families in preparing loan applications; verifying employment, income, credit, and debt. Excellent writing, phone, computer and interpersonal skills a must. Bilingual in English and Spanish a plus. EOE. Salary based on experience. • Send cover letter and resume with references to: Joe Quinlan, Self-Help Program Manager, Mountain Housing Opportunities, 64 Clingman Ave., Suite 101, Asheville, NC 28801. sUPerFUnd site technicAl Advisor POWER Action Group seeks a technical advisor to provide review and analysis of remedial action at the CTS of Asheville site. Superfund experience required. For full job description visit Want to advertise in Marketplace? 828-251-1333 x111 tnavaille@mountainx.com • mountainx.com/classifieds REaL EstatE reAl estAte homes For sAle historic Grovemont! Beautifully maintained Greystone bungalow. Tons of character, built-ins, views, 3 Fireplaces, luxurious red oak flooring. Two car garage and detached workshop. Beautifully landscaped level yard. $249,900. 
Cornerstone Real Estate, 828-779-2222.
NEAR UNCA • NORTH ASHEVILLE AREA: 2BR, 1BA, w/ new bath remodel, small private porch/yard, W/D hookup. $675/month includes water/sewer. Plenty of parking! 1 cat ok w/fee. Year's lease, security deposit, credit check and references required. For appt: Graham Investments: 253-6800.
NORTH ASHEVILLE: 3BR/1BA townhouse-style apt with new floors, one mile from downtown on the busline, no pets. $745/month. 828-252-4334.
NORTH ASHEVILLE: Townhouse-style apartment: 2BR, 1BA for $645/month. Very nice, all new floors. On the bus line, only 1 mile from downtown Asheville. No pets. 828-252-4334.

Mobile Homes for Rent
WEST ASHEVILLE: 2BR/2BA mobile home with WD connections, 3-4 miles from downtown Asheville on the busline, very nice. $595/month. 828-252-4334.

Jobs: Skilled Labor/Trades • Medical/Health Care
...ccwnc.org and note which region you are interested in, "CM-South" or "CM-Central", in the subject field.

Human Services

Roommates
ALL AREAS - ROOMMATES.COM: Browse hundreds of online listings with photos and maps. Find your roommate with a click of the mouse! Visit:. (AAN CAN)
GOLDEN GIRL SEEKS ROOMMATE: Beautiful furnished bedroom with private remodeled bathroom. Looking for mature female roommate in exchange for minimum housekeeping. Share food and utilities. Located in Hendersonville, off Spartanburg Hwy. Contact 828-692-3024.
WE ARE HIRING! Full-time factory workers. Join a team that encompasses a positive atmosphere and good work ethic! Call us at (828) 254-3934 or

Land for Sale
1.5 ACRE LOT adjacent to Reems Creek Golf Course in Weaverville, zoned R-2 for single family or duplex villa, utilities. Owner financing: $67,000. 813-949-7944. wcfunding@tampabay.rr.com
3.86 ACRES: Gently rolling, mostly wooded, long-range views, water and electricity at adjacent property. Candler. $165,000. Call Terry 828-216-5101.
twp@beverly-hanks.com AdministrAtive/ oFFice two Positions At cArolinA moUntAin lAnd conservAncy Carolina Mountain Land Conservancy is seeking applications for two full-time positions: Finance Director and Donor Relations & Events Manager. Full position descriptions and application instructions are available at. homes For rent Asheville eAst-dUPlexHalf house close in. 3BR, 2BA, hardwood floors, fireplace, dishwasher, WD. Woods and trails. No pets/smoking. $825/month, plus utilities. 828-273-6700. sAles/ mArketinG exPerienced inside sAles We are looking for a full-time experienced Inside Sales employee to join our team. Candidate will be responsible for order entry, customer service, and increasing sales revenue by anticipating customer needs and suggesting new products/ up-selling. Our business is fast paced, so the ideal candidate must be very organized and have strong phone and computer skills. We are looking for someone who is self motivated, positive, focused, reliable and detail oriented. Previous sales experience is preferred. Benefits include competitive pay with commission incentives, comfortable atmosphere w/ casual dress, holiday and vacation pay, and great office hours. Interested parties please fax or email resume and cover letter, Attn: Jacqui fax# 828-236-2658 or email: Jacqui@afgdistribution.com oUtwArd boUnd Admissions Advisor Outward Bound has openings for seasonal National Admissions Advisors from January through July 2014. Accepting cover letters and resumes now through December 20, 2013. Contact Ed Parker at eparker@outwardbound.org. eparker@outwardbound.org. commerciAl ProPerty oFFice sUites Downtown Asheville. 1-5 office suites from 490 sqft to 3,200 sqft. Modern finishes, elevator, central air. Affordable, full service rates. G/M Property Group 828-2814024. jmenk@gmproperty. 
com emPloyment GenerAl wArehoUse worker needed Golden Needle Acupuncture, Herbal and Medical Supply is seeking someone to work in our warehouse/shipping/receiving department. The applicant must be selfdirected and able to work with a high degree of accuracy and attention to detail. In addition, applicant must have computer skills. Knowledge of natural products and healing is preferred. Detailed job description is as follows: Assist in unpacking and checking in daily shipments, placement of product in designated areas of warehouse, keeping warehouse neat and orderly, insuring the rotation of stock, labeling and organizing samples and brochures for distribution to customers and prospects, organizing catalog bulk mailings, pulling orders from pick sheets, shipping orders using UPS worldship and priority mail. barry@goldenneedleonline.com (2) rn cAre mAnAGers (temPorAry) Buncombe County Region. Community Care of Western North Carolina is seeking to fill 2 Care Manager positions in the Central Region (Buncombe County). These are temporary positions where funding is available until the end of the fiscal year (6/30/14). The ideal candidate has 2+ years of Care Management experience; the position(s) requires an RN. • If interested, please send resumes to HR@ccwnc.org and note job code "CM-Temp" in the subject field. Added benefit: Please keep in mind that if a "regular" (non-temporary) CM position becomes available due to natural attrition, we will consider the Temp Care Managers before outside candidates, since they will have already been trained and more familiar with CCWNC. Pt or Prn PhysiciAn's AssistAnt or FAmily nUrse PrActitioner needed to Join oUr teAm Mountain Health Solutions-Asheville, a member of CRC Health Group and CARF accredited is an outpatient program specializing in the treatment of opiate dependence. We are currently seeking a PRN or PT t PA or FNP to conduct routine annual physicals for program patients. 
contact krubendall@crchealth.com or 828-2256050 ext 120 For sAle by owner tiny treetoP home! Hardwood floors, sunroom, skylight, garage. 2BR, 1BA. Large lot with mountain view. Shopping, entertainment nearby. $124,900. Call (865) 898-4017. mo55@juno.com mobile homes For sAle $43,000 • WESt aSHEVILLE convenience! 2009 Singlewide, Ideal Lease-Purchase Option, affordable payments! Move-in ready, 2 beds, 1 bath on .21 acre. 828-423-1349, Vickie Regala, vista real estate. eAst Asheville beverly hills home For rent 3 Bedroom / 1 Bathroom home with bonus sun room (200-plus sq. ft.). Hardwood floors throughout. Large wooden fenced backyard. 1 car attached automatic door garage. 2-yr old ceramic top kitchen stove, fridge, washer/dryer, and monitor heater all included. Wood burning fireplace. 1,300-plus sq. ft. 12-month lease, credit check, and first and last month's rent required. No smoking. Available for rent first of week of 1/14. 1 block from municipal golf course. 43 Gladstone Road Contact: Louis 828.279.6895 rentAl in blAck moUntAin Newly refurbished , 3BR 2BA Full Basement, Residential or Commercial $950 mo. Call 828-669-8999 rentAls APArtments For rent Arden town villAs Accepting applications for 2BR Townhouse apartments. • Family oriented. • From $395/ month, varies depending on income. • Handicapped accessible units when available. Airport Road, Arden. Equal Housing Opportunity. Call (828) 684-1724. rn cAre mAnAGers Buncombe Co.,Tran/Polk/Hndrsn Co’s. The ideal candidate has 2+ years of Care Management experience; the position(s) requires an RN. If interested, please send resumes to HR@ aVaILaBLE pOSItIONS • meridiAn behAviorAl heAlth Child and Family Services Team Clinician Seeking licensed/Associate licensed therapist for an exciting opportunity to serve youth and their families through Intensive InHome and Basic Benefit Therapy. For more information contact Julie Durham-Defee, julie. durham-defee@meridianbhs. 
org cherokee county Peer support specialist Assertive Community Treatment Team – (ACTT) Position open for Peer Support Specialist to provide community-based services. Being a Peer Support Specialist provides an opportunity for individuals to transform their own personal lived experience with mental health and/or addiction challenges into a tool for inspiring hope for recovery in others. Applicants must demonstrate maturity in their own recovery process and must have basic computer skills. For further information, contact Erin Galloway, erin.galloway@meridianbhs.org haywood and Jackson county Recovery Education Center Peer support specialist Multiple positions open for Peer Support Specialist working with in our recoveryoriented programs for individuals with substance abuse and/or mental health challenges. Being a Peer Support Specialist provides an opportunity for an individual to transform personal lived experience into a tool for inspiring hope for recovery in others. Applicants must demonstrate maturity in their own recovery process and be willing to participate in an extensive training program prior to employment. For further information, please contact Reid Smithdeal, reid.smithdeal@ teAchinG/ edUcAtion dAnce teAcher ArtSpace Charter School, a K-8 public school near Asheville, NC, has an immediate opening for an innovative, energetic, dance teacher to join its arts integration team, beginning January 2014. Candidates must be willing to work in a collaborative environment and willing to teach various subjects through dance to students in grades kindergarten through eight. Dance Pets of the Week Kelsey• Adopt a Friend Save a Life Female, Domestic Shorthair/Mix 11 years old Kelsey is an 11 year old cat, she is a calm girl. Content to just keep an eye on the world around her, while is purrs quietly in your lap! This special little girl is front declawed so she must be an inside only kitty. 
If you are interested in a mellow companion, then stop in and visit with Kelsey! Braveheart • Male, Retriever/Lab Mix, 1 year old Braveheart is an amazing dog. He would do great in a family with kids of all ages, dogs, cats and even chickens. Braveheart is loyal and eager to please. He walks well on a leash, though he does like to stop and sniff the roses. Come adopt this seriously big ball of love today, he will be your new best friend forever! More Online! Goldie Treagle Snickers Emerson 14 Forever Friend Lane, Asheville, NC 828-761-2001 • AshevilleHumane.org mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 61 Asheville Humane Society FREEWILL ASTROLOGY ARIES (March 21-April 19) by Rob Brezny CAPRICORN (Dec. 22-Jan. 19) Derrick Brown has a poem titled the free-flowing intimacy you’d love to was missing. Legend tells us that in the early 19th century, Napoleon’s uncle found the lower half of it might mountainx.com Furioso.” But then again, you might. SCORPIO (Oct. 23-Nov. 21) Between 2002 and 2009, Buddhist monk Endo Mitsunaga spent 1,000 days meditating while making vegetables hungry folks. I gather there’s a comparable situation in your life, Sagittarius: unplucked resources and ignored treasures. In 2014, I hope you take dramatic action to harvest and use them. they communicate with each other. The coming year will be an excellent time to heal the disconnect and fix the glitch. GEMINI (May 21-June 20) A horticultural company in the U.K. is selling TomTato bushes that grow both cherry tomatoes and white potatoes. The magic was accomplished through handcrafted hybridization, not genetic engineering. I foresee a comparable marvel in your longterm future, Gemini. I’m not sure about the exact form it will take. Maybe you will create a product or situation that allows you to satisfy two different needs simultaneously. It’s possible you’ll find a way to express two of your talents in a single mode. Or perhaps you’ll. 
62 DECEMBER 18 - DECEMBER 24, 2013 instruction experience and a bachelor’s degree is required. Dance education degree and NC licensure is preferred. Application deadline is January 15, 2014. Qualified applicants may email their resume to: resumes@artspacecharter. org Water International, American owned and made for over 50 years. • Patented and guaranteed. Call Stephen Houpis, 828-280-2254. CrystalClearWaterSystems.com Transportation MEDICAL TRANSPORTATION/CASINO TRIPS • Cherokee casinos weekly trips. Call for more info 828-215-0715 or visit us at: cesarfamilyservices. com/transportation.html Business Opportunities HELP WANTED Make extra money in our free ever popular homemailer program, includes valuable guidebook! Start immediately! Genuine! 1-888292-1120. (AAN CAN) Home Improvement Handy Man HIRE A HUSBAND Handyman Services. 31 years professional business practices. Trustworthy, quality results, reliability. $2 million liability insurance. References available. Free estimates. Stephen Houpis, (828) 280-2254. Xchange General Merchandise Heating & Cooling MAYBERRY HEATING AND COOLING Oil and Gas Furnaces • Heat Pumps and AC • • Radiant Floor Heating • • Solar Hot Water • Sales • Service • Installation. • Visa • MC • Discover. Call (828) 658-9145. HANDCRAFTED 12-STRING PARLOR GUITAR For sale by the builder, 12-string parlor guitar. Rich, deep, crystalline tone. Local Red Spruce top, African Sapele back and sides, handmade purfling. Includes solid-wood coffin case. Asking $4,000. Call for information or to arrange an appointment. (828)779-0590 Painting SOUTH ASHEVILLE CUSTOM PAINTING Residential • Interior • Exterior • Pressure Washing • Drywall/Plaster repair • Wallpaper removal • Excellent references • Free estimates • Reasonable Rates. Over 35 years experience. (828) 606-3874. Antiques & Collectibles MICKEY MOUSE Large collection of various antique, collectibles, posters, pictures and memorabilia. Call 1-410-3588470. Leave contact number. 
Tools & Machinery
2006 JOHN DEERE 5525 TRACTOR FOR SALE: 2006 John Deere 5525, asking $9,700; has cab, heat, air, 91HP, FWD, 540 PTO. sahberg5@outlook.com / 919-727-9742.

Announcements
ADVERTISE your business or product in alternative papers across the U.S. for just $995/week. New advertiser discount "Buy 3 Weeks, Get 1 Free" ads (AAN CAN)
PREGNANT?

Wanted
BUYING OLD PAPER MONEY: Stocks, bonds, documents, etc. Fair, fast payment. Email your contact info and list to: buyingpapermoney@gmail.com

Services: Caregivers
NEED A BREAK OR A HOLIDAY JAUNT? Live-in caregiver for elderly/handicapped, available for Holiday respite care or longer term. 15 years experience. References provided. Call Judy: (828) 675-9075.

...home water test. WNC factory authorized dealer, for Hague

Classes & Workshops
CLAY CLASSES AT ODYSSEY CLAYWORKS: Winter Clay Classes begin January 13. Designing Tableware, Animal Forms, Caffeine, Baskets With Handles And Feet, Intro To Handbuilding Part II, Pottery And Sculpture For Gardens and Landscapes, Wheel Throwing for Beginners, Bigger Pots Made Easy. Visit for details about our upcoming classes and workshops, or call 828-285-0210.
PIANO LESSONS: Come learn in a welcoming studio with Ms. Farrell: patient, affirming, inspiring, fun, well-seasoned teacher with BM degree. Parking for parents/space for waiting. farrellsylvest@gmail.com, 828-232-7048. Oakley neighborhood (SE)

THE NEW YORK TIMES CROSSWORD PUZZLE • No. 1113 • Edited by Will Shortz

ACROSS
1 Beverages in the a.m.
4 9-Across buy
9 Company founded by a 17-year-old Swede
13 Young boxer
14 Cry of fear or hilarity
15 Housecat's perch
16 Foofaraw
17 Recipe instruction #1
19 Slips and such
21 Tony of "Taxi"
22 Recipe instruction #2
25 Owners of an infamous cow
27 Banshee's cry
33 Recipe instruction #3
38 Tarzan creator's monogram
39 Bell Labs operating system
40 Nifty
41 Seller's caveat
42 Renaissance, literally
45 Recipe instruction #4
49 Tilter's weapon
50 Renders unnecessary
53 Recipe instruction #5
56 An ex of Frank
57 Painter Mondrian
61 What you get when you blend the results of this puzzle's recipe instructions
62 Bugling beast

DOWN
1 Gem of a girl?
2 Dench who played Elizabeth I
3 Squarish TV toon
4 Minimum age for a U.S. senator
5 ___ Army (golf fans of old)
6 Muscle strengthened by curls, informally
7 Van Cleef of "High Noon"
8 Heart test letters
9 Lost Tribes' land
10 Ceramists' fixtures
11 Pupil of 'enry 'iggins
12 ___ Highway (historic route to Delta Junction)
14 Lipstick slip
18 Be a fan of
20 Get, as a concept
23 Mil. truant
24 Brother of Fidel
25 As soon as
26 Cowardly Lion portrayer
29 Tough spot
30 Fudge, say
31 Patrolman's rounds

Mind, Body, Spirit: Bodywork

Pets: Lost Pets
A LOST OR FOUND PET? Free service. If you have lost or found a pet in WNC, post your listing here:

Pet Services
ASHEVILLE PET SITTERS: Dependable, loving care while you're away. Reasonable rates. Call Sandy (828) 215-7232.
LEAVING TOWN FOR THE HOLIDAYS? Excellent references. Reasonable rates. Call Flo: 298-5649. Tender loving care cat sitting service.

Health & Fitness
COLONICS: $20 OFF FOR FIRST-TIME CLIENTS. Intestinal cleansing can eliminate years of accumulated toxic wastes and stop unnecessary recycling of poisons. Helps nutrition absorption, weight reduction, and more. ascendingcolonhydrotherapy.com (828) 284-6149
MEN'S LIFESTYLE MEDICATIONS: FDA Approved - USA Pharmacies. Remote TeleMedicine Physician. Safe • Secure • Discreet. Calls Taken 7 days per week.
Call ViaMedic: 888-786-0945. Trusted Since 1998. (AAN CAN) 51 56 59 62 52 AUtomotive AUtos For sAle cAsh For cArs: Any Car/ Truck. Running or Not! Top Dollar Paid. We Come To You! Call For Instant Offer: 1-888420-3808 (AAN CAN) 58 Term of address for a nobleman 28 Slaps the cuffs on 29 Number of pecks 59 Altoids container in a 34-Down 60 Impersonal letter 30 U.K. bestowal starter PUZZLE BY JEAN O’CONOR 32 O.T. book read during Purim 34 Farmer’s basketful, maybe 35 Have ___ (surreptitiously imbibe) 36 Emphatic assent, in Baja 37 “The Red Tent” author Diamant 41 Items at a haberdashery 42 PC start-over 43 “Green,” in product names 44 Physique 45 Sounds of appreciation 46 Pizza cuts, essentially 47 Hypnotized 48 Year-end airs 51 Bad to the bone 52 Put in the cup, as a golf ball 54 Mischievous sort 55 Contend AUtomotive services we'll Fix it AUtomotive • Honda and Acura repair. Half price repair and service. ASE and factory certified. Located in the Weaverville area, off exit 15. Please call (828) 275-6063 for appointment. ANSWER TO PREVIOUS PUZZLE For mUsiciAns Puzzle J E T S to R Previous A T E D G H O P E A R P A P O G E E D R U G S M U S S A B EB RA R D I RG AH F TT D O O HD NG OE R C O RL I T O N I B S H MO R W A SR AD I D I T CNMO OT N A S D I I TR ST UY PD AMNOCN I T NE G C C A AN NT OF NO S O L I BD E E R S E T T S O E R X A A T M E S S A AN LE LR T CO AT D M D IE S MO O L I AT RI M O AN DD A E R B Y B E E S T EW E O ER NA YL OS U A NBD AM LE E DN UA RN A TN E S T A Y B E BA ER D H D A U T N K GI IN SD O N CU A TMS A R O O TR AO L T KA I N S T AO S I S RW AA E L D W H A L E M O M G M D O U B VL EE I DNU T C H D O N T S L E E OC N AS RI ET E T UNR AB TO O A R G O Y D S S ST PA AT RE S A T I A NR 2 I D H E S S A S H Y S E G A L For answers, call 1-900-285-5656, $1.49 a minute; or, with a credit card, 1-800-814-5554. 
For answers: Call 1-900-285-5656, online subscriptions: Today’s puzzle Annual subscriptions are the bestthan of Sunday and more 2,000 past puzzles, $1.49 a minute; or, with a available credit card, for nytimes.com/crosswords ($39.95 a 1-800-814-5554. crosswords from the last 50 years: 1-888-7-ACROSS. year). AT&T users: Text NYTX 386 to Annual subscriptions are to available for download puzzles, or visit share tips: nytimes.com/wordplay. the best of Sunday crosswords from the information. nytimes.com/mobilexword for more last 50 years: 1-888-7-ACROSS. Online subscriptions: Today’s puzzle crosswords and more than for 2,000 young past solvers: nytimes.com/learning/xwords. At&t users: Text NYTX to 386 to($39.95 puzzles, nytimes.com/crosswords a year). download puzzles, or visit nytimes.com/ Share tips: nytimes.com/wordplay. mobilexword for more information. Crosswords for young solvers: nytimes.com/learning/xwords. Furniture Magician • Cabinet Refacing • Furniture Repair • Seat Caning • Antique Restoration • Custom Furniture & Cabinetry (828) 669-4625 Paul Caron • Black Mountain mountainx.com DEcEmBER 18 - DEcEmBER 24, 2013 63
http://issuu.com/mountainx/docs/12.18.13_mx_opt?e=1201348/6043341
Save 37% off Spark in Action, 2nd Ed. Just enter code fccperrin into the discount code box at checkout at manning.com.

In part 1 we dealt with ingesting data from a CSV file, and in part 2 we ingested from a JSON file. In this part we're going to talk about ingesting data from an XML file.

Ingesting an XML file

In this section, we'll ingest an XML document containing the NASA patents, display a few patents, and print the dataframe's schema. Note that, in this context, the schema isn't an XML Schema (or XSD), but a dataframe schema.

Quite a few years ago, when I discovered XML, I thought it could become a unified lingua franca of data exchange. XML is:

- Structured.
- Extensible.
- Self-describing.
- Able to embed validation rules through DTD (Document Type Definition) and XSD (XML Schema Definition).
- A W3C standard. You can read more about XML at:.

Like HTML and every other markup language descended from SGML, XML looks like this:

<rootElement>
    <element attribute="attribute's value">
        Some payload in a text element
    </element>
    <element type="without sub nodes"/>
</rootElement>

Unfortunately, XML is verbose and harder to read than JSON. Nevertheless, XML is still widely used, and Apache Spark ingests it nicely. Figure 1 shows a fragment of the XML file and illustrates the process.

Figure 1 Spark ingests an XML file containing NASA patents. Spark uses an external plugin, provided by Databricks, to perform the ingestion. Spark then displays records and the dataframe schema (not to be confused with an XML Schema).

For this XML example, you're going to ingest the NASA patents. NASA offers various open datasets at. Listing 1 shows a record of this file. You can download the NASA patents dataset from:.

For this example, I used Spark v2.2.0 on MacOS X v10.12.6 with Java 8, as well as Databricks' XML parser v0.4.1. The dataset was downloaded in January 2018.
Listing 1 – NASA patents (excerpt)

<response> ❶
<row _address="…" _id="…" _position="…" _uuid="…"> ❷ ❸
<center>NASA Ames Research Center</center>
<status>Issued</status>
<case_number>ARC-14048-1</case_number>
<patent_number>5694939</patent_number>
<application_sn>08/543,093</application_sn>
<title>Autogenic-Feedback Training Exercise Method & System</title>
<patent_expiration_date>2015-10-03T00:00:00</patent_expiration_date>
</row>
…
</response>

❶ The root element of your list of patents
❷ The element (or tag) designating our record
❸ Attributes are prefixed by one underscore (_)

Desired output

Listing 2 shows the output of the dataframe's data and schema after ingesting the NASA patents as an XML document. You can see that the attributes are prefixed by an underscore (_) (the attributes already had an underscore as a prefix in the original document, so they have two now) and that each element's name is used as a column name.

Listing 2 – NASA patents in a dataframe

+--------------------+----+----------+--------------------+--------------+…
|           __address|__id|__position|              __uuid|application_sn|…
+--------------------+----+----------+--------------------+--------------+…
| 407| 407|2311F785-C00F-422...| 13/033,085|…
| 1| 1|BAC69188-84A6-4D2...| 08/543,093|…
| 2| 2|23D6A5BD-26E2-42D...| 09/017,519|…
| 3| 3|F8052701-E520-43A...| 10/874,003|…
| 4| 4|20A4C4A9-EEB6-45D...| 09/652,299|…
+--------------------+----+----------+--------------------+--------------+…
only showing top 5 rows

root
|-- __address: string (nullable = true)
|-- __id: long (nullable = true)
|-- __position: long (nullable = true)
|-- __uuid: string (nullable = true)
|-- application_sn: string (nullable = true)
|-- case_number: string (nullable = true)
|-- center: string (nullable = true)
|-- patent_expiration_date: string (nullable = true)
|-- patent_number: string (nullable = true)
|-- status: string (nullable = true)
|-- title: string (nullable = true)

Code

As usual, our code starts with a main() method, which calls a start() method that creates a Spark
session. Listing 3 is the Java code needed to ingest the NASA XML file, then display five records and its schema. Listing 3 – XmlToDataframeApp.java package net.jgp.books.sparkWithJava.ch07.lab_300.xml_ingestion; import org.apache.spark.sql.Dataset; import org.apache.spark.sql.Row; import org.apache.spark.sql.SparkSession; public class XmlToDataframeApp { public static void main(String[] args) { XmlToDataframeApp app = new XmlToDataframeApp(); app.start(); } private void start() { SparkSession spark = SparkSession.builder() .appName("XML to Dataframe") .master("local") .getOrCreate(); Dataset<Row> df = spark.read().format("xml") ❶ .option("rowTag", "row") ❷ .load("data/nasa-patents.xml"); df.show(5); df.printSchema(); } } ❶ Specify XML as the format. Case doesn’t matter. ❷ The element or tag that indicates a record in the XML file. I had to modify the original NASA document as it contained an element with the same name wrapping the records. Unfortunately, as of now, Spark can’t do it for us. The original structure was: <response> <row> <row _id="1" …> … </row> … </row> </response> If the first child of response was rows or anything other than row, I wouldn’t have had to remove it (another option is to rename it). As the parser isn’t part of the standard Spark distribution, you need to add it to the pom.xml, as described in listing 4. To ingest XML, use a product called spark-xml_2.11 (the artifact), by a company called Databricks, in version 0.4.1. 
Listing 4 – pom.xml to ingest XML (excerpt)

…
<properties>
…
<scala.version>2.11</scala.version> ❶
<spark-xml.version>0.4.1</spark-xml.version> ❷
</properties>
<dependencies>
…
<dependency>
<groupId>com.databricks</groupId>
<artifactId>spark-xml_${scala.version}</artifactId> ❸
<version>${spark-xml.version}</version> ❹
<exclusions> ❺
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
…
</dependencies>
…

❶ Scala version against which the XML parser is built
❷ Version of the XML parser
❸ Equivalent to spark-xml_2.11
❹ Equivalent to 0.4.1
❺ Optional: I've made a habit of excluding the logger from other packages to keep better control over the one I use

More details on Spark XML can be found at. Not too complex! Stay tuned for part 4. If you want to learn more about the book, check out its liveBook here and see this slide deck.
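As noted earlier, the original NASA file wrapped the row records in another element that was itself named row, which the rowTag option can't disambiguate. Renaming that wrapper, rather than removing it, would also work; a restructured document along those lines might look like the fragment below, where the rows element name is only illustrative — anything other than row will do:

```xml
<response>
  <rows>
    <row _id="1" …>
      …
    </row>
    …
  </rows>
</response>
```

With rowTag set to "row", the parser then matches only the record elements.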
https://freecontent.manning.com/ingesting-data-from-files-with-spark-part-3/?a_aid=jgp
/ Published in: ActionScript 3

I've updated this snippet to remove the timer listener and null the timer after the function has been repeated repeat times. You use it like this:

delay(functionname, [param1, param2], 350, 3);

This will call functionname three times with a 350 millisecond delay between each call and pass param1 and param2 to the function. If you need to call a function that doesn't accept parameters, pass an empty array [] as the second argument.

import flash.events.TimerEvent;
import flash.utils.Timer;

/**
 * delay function
 * a quick and easy delay function that can call a function with parameters, configurable
 * with delay time and repeat frequency
 *
 * @param func:Function The function to call when the timer is complete
 * @param params:Array An array of parameters to pass to the function
 * @param delay:int [OPTIONAL] The number of milliseconds to wait before running the function
 * @param repeat:int [OPTIONAL] The number of times the function should repeat
 */
private function delay(func:Function, params:Array, delay:int = 350, repeat:int = 1):void
{
    var f:Function;
    var timer:Timer = new Timer(delay, repeat);
    timer.addEventListener(TimerEvent.TIMER, f = function():void
    {
        func.apply(null, params);
        if (timer.currentCount == repeat)
        {
            timer.removeEventListener(TimerEvent.TIMER, f);
            timer = null;
        }
    });
    timer.start();
}

Be sure to include the timer classes:

import flash.events.TimerEvent;
import flash.utils.Timer;

This is a function that I've needed but that wasn't available in AS3. Basically, we create a timer, add a listener to it, and start the timer. In the listener, we run the supplied function with the supplied parameters, then remove the listener.

UPDATE: I've had to remove the timer = null declaration because it was clearing the timer variable before additional iterations were completed.
It worked fine when repeat was set to 1, but when repeat was greater than 1, the timer was nulled out before the second iteration could be started.

UPDATE: I've fixed the timer = null issue. The function now works as expected.

This was exactly what I was looking for..... thank you very much!! ^_^
http://snipplr.com/view/51631/
WalletConnect implementation for Python wallets

Project description

pyWalletConnect — A WalletConnect implementation for wallets in Python

A Python3 library to link a wallet with a WalletConnect web3 app.

This library connects a Python wallet with a web3 app online, using the WalletConnect v1 standard. Thanks to WalletConnect, a Dapp is able to send JSON-RPC call requests to be handled by the wallet, such as remote sign requests for transactions or messages. Using WalletConnect, the wallet is a JSON-RPC service that the dapp can query through an encrypted tunnel and an online relay. This library is built for the wallet part, which establishes a link with the dapp and receives requests from a web3 app.

pyWalletConnect automatically manages the whole WalletConnect stack on its own:

WalletConnect | JSON-RPC | EncryptedTunnel | WebSocket | HTTP | TLS | Socket

Installation and requirements

Works with Python >= 3.6.

Installation of this library

Easiest way: python3 -m pip install pyWalletConnect

From sources, download and run in this directory: python3 -m pip install .

Use

Instantiate with pywalletconnect.WCClient.from_wc_uri, then use the methods of this object.

Basic example:

from pywalletconnect import WCClient

# Input the wc URI
string_uri = input("Input the WalletConnect URI : ")
wallet_dapp = WCClient.from_wc_uri(string_uri)
# Wait for the sessionRequest info
try:
    req_id, req_chain_id, request_info = wallet_dapp.open_session()
except WCClientInvalidOption as exc:
    # In case of an error in the wc URI provided
    wallet_dapp.close()
    raise InvalidOption(exc)
if req_chain_id != account.chainID:
    # Chain id mismatch
    wallet_dapp.close()
    raise InvalidOption("Chain ID from Dapp is not the same as the wallet.")
# Display the request details provided by the Dapp to the user.
user_ok = input(f"WalletConnect link request from : {request_info['name']}. Approve?
[y/N]")
if user_ok.lower() == "y":
    # User approved
    wallet_dapp.reply_session_request(req_id, account.chainID, account.address)
    # Now the session with the Dapp is opened
    <...>
else:
    # User rejected
    wallet_dapp.close()
    raise UserInteration("user rejected the dapp connection request.")

pyWalletConnect maintains a TLS WebSocket opened with the host relay. It builds an internal pool of request messages received from the dapp. Once the session is opened, you can read the pending messages received from the Dapp from time to time, and your wallet app can then process them.

Use a daemon thread timer, for example, to call the get_message() method at a short interval; 3-6 seconds is an acceptable delay. This can also be done in a blocking for loop with a sleep. Then process the Dapp queries for further user wallet actions. Remember to keep track of the request id, as it is ultimately needed for .reply(req_id, result) when sending the processing result back to the dapp service. One way is to provide the id as an argument to your processing methods. This can also be done with global or shared state.

def process_sendtransaction(call_id, tx):
    # Processing the RPC query eth_sendTransaction
    # Collect the user approval about the tx query
    # < Accept (tx) ? >
    # if approved :
    # Build and sign the provided transaction
    <...>
    # Broadcast the tx
    # Provide the transaction id as result
    result = "0x..."  # Tx id
    wallet_dapp.reply(call_id, result)

def watch_messages():
    # Watch for received messages.
    # Reads the WalletConnect calls:
    # read all the message requests received from the dapp,
    # then dispatch them to the wallet service handlers.
    # get_message gives (id, method, params) or (None, "", [])
    wc_message = wallet_dapp.get_message()
    # Loop in the waiting messages pool, until depleted
    while wc_message[0] is not None:
        # Read an available WalletConnect call message
        id_request = wc_message[0]
        method = wc_message[1]
        parameters = wc_message[2]
        if "wc_sessionUpdate" == method:
            if parameters[0].get("approved") is False:
                raise Exception("Disconnected by the Dapp.")
        # Dispatch query processing
        elif "eth_signTypedData" == method:
            process_signtypeddata(id_request, parameters[0])
        elif "eth_sendTransaction" == method:
            tx_to_sign = parameters[0]
            process_sendtransaction(id_request, tx_to_sign)
        elif "eth_sign" == method:
            process_signtransaction(parameters[0])
        <...>
        # Next loop
        wc_message = wallet_dapp.get_message()

# GUI timer repeated or threading daemon
# Will call watch_messages every 4 seconds
apptimer = Timer(4000)
# Call watch_messages periodically when it expires
apptimer.notify = watch_messages

See also the RPC methods in WalletConnect to learn more about the expected result for a specific RPC call.

Interface methods of WCClient

WCClient.from_wc_uri( wc_uri_str )

Creates a WalletConnect wallet client from a wc URI.

wc_uri_str: the full EIP-1328 wc URI provided by the Dapp.

You need to call open_session immediately afterwards to get the session request info.

.close()

Closes the underlying WebSocket connection.

.get_relay_url()

Gives the URL of the WebSocket relay bridge.

.get_message()

Gets a RPC call message from the internal waiting list. pyWalletConnect maintains an internal pool of request messages received from the dapp, and this get_message method pops out a message in FIFO order: the first call provides the oldest (first) received message. It can be used like a pump: call get_message() until an empty response, since it reads messages from the receiving bucket one by one.
Returns: (RPCid, method, params), or (None, "", []) when no data has been received since the last call (or since the initial session connection). Non-blocking, so it always returns immediately.

.reply( req_id, result_str )

Sends a RPC response to the webapp (through the relay).

req_id is the JSON-RPC id of the corresponding query request that the result belongs to. One must keep track of this id from get_message up to this reply, so that the reply result is given back with its associated call query id.

result_str is the result field to provide in the RPC result response.

.open_session()

Starts a WalletConnect session: waits for the session call request message. Must be called right after a WCClient creation.

Returns: (message RPCid, chain ID, peerMeta data object). Throws WalletConnectClientException("sessionRequest timeout") after 8 seconds if no sessionRequest is received.

.reply_session_request( msg_id, chain_id, account_address )

Sends the sessionRequest result when the user approves the connection session request in the wallet.

msg_id is the RPC id of the sessionRequest call provided by open_session.

Support

Open an issue in the Github repository for help about its use.
https://pypi.org/project/pyWalletConnect/1.1.0/
Sometimes you will find yourself wanting to resize a photo. I usually want to do this for photos that I want to email or post on a website, since some of my images can be quite large. Normal people use an image editor. I usually do as well, but for fun I thought I would look into how to do it with the Python programming language.

The quickest way to do this is to use the Pillow package, which you can install with pip. Once you have it, open up your favorite code editor and try the following code:

from PIL import Image


def resize_image(input_image_path, output_image_path, size):
    original_image = Image.open(input_image_path)
    width, height = original_image.size
    print('The original image size is {wide} wide x {height} '
          'high'.format(wide=width, height=height))

    resized_image = original_image.resize(size)
    width, height = resized_image.size
    print('The resized image size is {wide} wide x {height} '
          'high'.format(wide=width, height=height))
    resized_image.show()
    resized_image.save(output_image_path)


if __name__ == '__main__':
    resize_image(input_image_path='caterpillar.jpg',
                 output_image_path='caterpillar_small.jpg',
                 size=(800, 400))

Here we import the Image class from the Pillow package. Next we have a function that takes 3 arguments: the location of the file we want to open, the location where we want to save the resized image, and a tuple that represents the new size the image should be, where the tuple contains the width and height respectively.

Next we open our image and print out its size. Then we call the image object's resize() method with the size tuple we passed in. Finally we grab the new size, print it out, and then show the image before saving the resized photo. Here is what it looks like now:

As you can see, the resize() method doesn't do any kind of scaling. We will look at how to do that next!

Scaling an Image

Most of the time, you won't want to resize your image like we did in the previous example unless you also want to write your own scaling method. The problem with the previous method is that it does not maintain the photo's aspect ratio when resizing. So instead of resizing, you can just use the thumbnail() method. Let's take a look:

from PIL import Image


def scale_image(input_image_path,
                output_image_path,
                width=None,
                height=None):
    original_image = Image.open(input_image_path)
    w, h = original_image.size
    print('The original image size is {wide} wide x {height} '
          'high'.format(wide=w, height=h))

    if width and height:
        max_size = (width, height)
    elif width:
        max_size = (width, h)
    elif height:
        max_size = (w, height)
    else:
        # No width or height specified
        raise RuntimeError('Width or height required!')

    original_image.thumbnail(max_size, Image.ANTIALIAS)
    original_image.save(output_image_path)

    scaled_image = Image.open(output_image_path)
    width, height = scaled_image.size
    print('The scaled image size is {wide} wide x {height} '
          'high'.format(wide=width, height=height))


if __name__ == '__main__':
    scale_image(input_image_path='caterpillar.jpg',
                output_image_path='caterpillar_scaled.jpg',
                width=800)

Here we allow the programmer to pass in the input and output paths as well as our max width and height. We then use a conditional to determine what our max size should be, and then we call the thumbnail() method on our open image object. We also pass in the Image.ANTIALIAS flag, which applies a high-quality downsampling filter and results in a better image. Finally we open the newly saved scaled image and print out its size to compare with the original size.

If you open up the scaled image, you will see that the aspect ratio of the photo was maintained.

Wrapping Up

Playing around with the Pillow package is a lot of fun! In this article you learned how to resize an image and how to scale a photo while maintaining its aspect ratio. You can now use this knowledge to create a function that iterates over a folder and creates thumbnails of all the photos in that folder, or you might create a simple photo-viewing application where this sort of capability is handy to have.

Related Reading

- StackOverflow: Python / Pillow: How to scale an image
- Convert a Photo to Black and White in Python
- How to Rotate / Mirror Photos with Python
- How to Crop a Photo with Python
http://www.blog.pythonlibrary.org/2017/10/12/how-to-resize-a-photo-with-python/
Source Code

#include <stdio.h>

/* returns the day name for a day number (0 = Sunday, ..., 6 = Saturday) */
const char *getName(int day) {
    static const char *names[] = {
        "Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"
    };
    return names[day];
}

/* returns the day number (0 = Sunday) using Sakamoto's algorithm */
int getDayNumber(int dd, int mm, int yy) {
    static int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
    yy -= mm < 3;
    return (yy + yy/4 - yy/100 + yy/400 + t[mm-1] + dd) % 7;
}

int main() {
    int dd, mm, yy;
    printf("Enter day month and year (dd mm yyyy): ");
    scanf("%d%d%d", &dd, &mm, &yy);
    printf("\nThe corresponding day is %s", getName(getDayNumber(dd, mm, yy)));
    return 0;
}

Output

Enter day month and year (dd mm yyyy): 10 03 1991

The corresponding day is Sunday

Thank you sir for this program, but I thought you should know that it has a bug: it gives results for undefined dates, such as the 29th of February 2015 (which doesn't even exist). Please reply if you can help me out. Thank you.

Write a program that tells the following about days of the week:
- If the day is from 1 to 5, it is a weekday.
- If it is the 6th day (Saturday), it is the weekend.
- If it is the 7th day (Sunday), it is a holiday.
http://www.programming-techniques.com/2013/02/cc-program-to-ditermine-day-of-week.html
Import eclipse project with other package name? - Eclipse / Android

I have a project in Eclipse (a finished app). Now I want to copy this project, but with another package name. I just want to make a new app that will have only small changes (some images and the database). How can I import the old project with another package name, or what should I do in this situation?

Answers

- Copy the project files to a new location
- Import the new project
- Right-click on your new project, then select -> Android Tools | Rename application package
http://unixresources.net/faq/12355665.shtml
JavaFX: Insider's Guide to Mixing Swing and JavaFX

Ibn Saeed, Ranch Hand, Posts: 45 — posted 11 years ago

Hello,

Just came upon this Insider's Guide to Mixing Swing and JavaFX:

Injecting the "extreme" into this year's GUI Makeover session at JavaOne, Jasper's finale of blowing up junk email with rocket explosions was really fun and of course completely absurd. Looking beyond the antics, our message in the session was actually very practical: You can blend the strengths of Swing and JavaFX to give your Java applications that visual edge that users are coming to expect from modern clients. If you weren't able to attend the session you can download the GUI Makeover slides (I couldn't resist drawing a mustache & beard on Chet's face in slide 6). If you're a Swing developer who's both skeptical and curious about JavaFX, I've broken down the process into 10 steps for integrating your Swing components into a JavaFX application.

Note: We also recognize the need for the inverse (embedding a JavaFX scene into a Swing app), however that is not supported with 1.2 as it requires a more formal mechanism for manipulating JavaFX objects from Java code. The structure of these JavaFX objects is a moving target (for some very good reasons) and we're not ready to lock that down (yet). However, as with most software, this can be hacked (see Rich & Jasper's hack on Josh's blog) but know that you are laying compatibility down on the tracks if you go there. Honestly, you can pull large portions of your Swing GUIs into JavaFX very easily...

0. Find Your Design

Since every good list-of-10 should go to 11, I've inserted a Step 0 (really the most important step of all): use all means at your disposal to find a good design. If you aren't lucky enough to have that rare engineer who's good at both coding and visual design (e.g. a Jasper Potts or a Romain Guy) and you can't afford to hire a good designer, then study the interfaces you respect and "borrow" from them.
The nicest interfaces use effects and animation in subtle ways to bring fluidity to the UI. Here are some random links on the subject (and I'd welcome suggested additions to this list): * Animation: From Cartoons to the User Interface (from 1995, Self language defunct, but principles still relevant) * Interfaces That Flow: Transitions As Design Elements * Excerpts from Josef Alber's Teachings on Color Theory 1. Touch Base with the Language Turns out that most of the developers I encounter who have a negative impression of the JavaFX scripting language have actually never tried it. A little more than a year ago I was there too; it looked weird and I couldn't imagine why I'd ever want to use anything but Java to code my apps. But it turns out that using a language designed for interface construction is amazingly gratifying. I'll admit there was a little hump to get over in learning to think declaratively vs. procedurally, but once that clicked, I found I can't imagine living without the JavaFX script features of binding, function pointers, and sequences. So suspend your disbelief for just a moment and browse the tutorial: JavaFX Language Tutorial Another critique I've heard is that the declarative, 'scripty' nature of JavaFX leads to unwieldy and hard-to-maintain code; to that I say that it is possible to write crappy code in any language and that JavaFX script does provide an object-oriented model well-suited to the discipline of nice class structure and well-factored code. Just keep using the skills that Java taught you. 2. Download the SDK Just do it. Go to javafx.com and click on the orange "Download Now" link. I recommend installing the NetBeans 6.5.1 bundle to get the JavaFX plugin automatically. 3. Create NetBeans Project You'll want to create a new JavaFX NetBeans project (File -> New Project...select "JavaFX" type). 
[Note: In the "Create Main File" checkbox, I usually change "Main" to a more descriptive classname for my app because I find it annoying to end up with a dozen "Main.fx" source tabs in NetBeans]. Once your project is there, you can just copy your existing Swing app source code into the "src" directory. For GUI Makeover, we placed the swing sources in their own subpackage for cleanliness. Another option would be to keep your Swing app source in its own Java project and create a jar file which your JavaFX project could use. In our case it was simpler to keep it all in a single project since we wanted to also make minor adjustments to the Swing code (tweaking colors, fonts, etc). I'll take a moment to emphasize that the core of any serious application should remain in Java and I'm advocating that only the client veneer should be migrated to JavaFX (a good model for separation of concerns anyways). There are numerous reasons for this, but the glaring one is that JavaFX is single-threaded -- any code that needs to run off the main gui thread must execute in Java. 4. Create a Stage The root of all JavaFX interfaces is a Stage . When you created your JavaFX NetBeans project, it most likely generated a default Stage declaration in your main source file: Stage { title: "Application title" width: 250 height: 80 scene: Scene { content: [ Text { ... } ] } } Customize that stage as desired. Stage has both a style and an opacity variable to make it really easy to create non-rectangular UIs. If you choose a StageStyle of TRANSPARENT or UNDECORATED you will have to provide your own input handling for managing the toplevel window in the user's environment (on desktop). 5. Define the Layout You'll need to replace the content of the stage's Scene with your own UI. In our case with GUI Makeover, our stage was composed of a mixture of FX-node-based panels and Swing components. 
Since our designer tool isn't yet shipping, you'll have to do this by hand and here's where I'll do a bit of hand-waving, as layout in any sophisticated graphical toolkit is impossible to cover in a paragraph or two. For an overview of JavaFX scene graph layout, you can check out the JavaOne UI Controls session slides where I presented layout in detail (starting at slide 61 -- warning: it's a large slide deck!). My JavaFX layout blog will also be updated soon to reflect changes in 1.2. Starting simply, you can look at the container classes in javafx.scene.layout ( Flow , Tile , HBox , VBox , Stack , and Panel ). It's amazing how far one can get by nesting HBox and VBox . If you want the more sophisticated power of a grid style layout, you can use either Stephen Chin's Grid container or MigLayout (ported by Dean Iverson) in the JFXtras library. 6. Embed Swing Components Any Swing component can be embedded in a JavaFX scene graph using the SwingComponent wrap() function. This conveniently allows you to directly leverage those Swing components which you've already configured, customized, and hooked to your application data; all that Java code can remain happily unmodified. Once you've created the structure of your scene's layout, you can pull your Swing components into the appropriate locations. Here's a simple example: def swingTable = new JTable(); Stage { scene: Scene { content: VBox { content: [ SwingComponent.wrap(swingTable), // other FX nodes ] } } } The wrapped Swing component becomes a leaf node in the scene graph and as such can be manipulated like any other node -- rotated, scaled, skewed, made transluscent, etc. Through some hidden magic which transforms input events, the Swing component will just work, oblivious to its new 2D housing. The limitation here is that the internals of the Swing component are not nodes in the scene and cannot be manipulated individually using JavaFX graphics ops. 
It's also worth noting that the entire Swing component can be made translucent by setting the opacity variable on the wrapper, but this is not the same as having the Swing component's background be translucent. Another important restriction is that although FX script can freely invoke any of the Java APIs on the Swing components, JavaFX Script's language features (object literals, binding, functions) cannot be used directly with the component. This restriction can be mitigated by creating specialized FX classes which expose the Swing component APIs as variables and functions. Many such classes are already provided in the javafx.ext.swing package (e.g. SwingButton) and the techniques for doing so are certainly worthy of another blog. For our GUI Makeover client, we embedded a Swing JList and JTextArea, neither of which yet have pure FX counterparts. The JList already had a really nice custom cell renderer and we only had to modify the colors to better match the new visual design created by Jasper.

7. Hook up Listeners

Although the magic of JavaFX's bind capability is not available on the embedded Swing components, it turns out to be easy to hook up appropriate listeners to those components within JavaFX Script. In the GUI Makeover session, I showed how to create a JavaFX class which extends a Java Listener interface; however, after the session, Richard Bair showed me some syntactic sugar which makes this even easier:

    // appcontroller is instance of Java class maintaining various app model state
    // No explicit FX listener extension class required!
    appcontroller.addPropertyChangeListener(PropertyChangeListener {
        override function propertyChange(event:PropertyChangeEvent):Void {
            if (event.getPropertyName().equals("selectedMailBox")) {
                // handle property change...
            }
        }
    });

8. Add Effects

Now the fun begins. You no longer have to be a 2D mathematician to add effects like drop shadows, lighting effects, or box blurs into your interface.
Chris Campbell has created the javafx.scene.effect and javafx.scene.effect.light packages in JavaFX that make such visuals do-able, often with just a single line of code. To mention just a subset: DropShadow, Glow, BoxBlur, Reflection, SpotLight. For GUI Makeover, we put a reflection on our toolbar buttons:

    def reflection = Reflection { fraction: 0.33 }; // effects can be shared!
    composeButtonLabel = VBox {
        spacing: 4
        content: bind [
            ImageView {
                image: Image { url: "{__DIR__}resources/compose.png" }
                effect: reflection
                cache: true // caching good idea for static effects!
            },
            label = Text { ... }
        ]
    }

You can achieve such effects in Swing (sans JavaFX) using clever Java2D imaging tricks, but the JavaFX language and scene graph capabilities make it vastly easier. What this means is that with very little effort you can experiment to explore what looks good rather than spending a bunch of time getting the math right to discover it wasn't what you wanted (I say this from personal experience :-).

9. Add Movement (but don't over-do it)

Let's face it - the iPhone has raised the bar on interface design. The use of nice transitions to guide your users is now commonplace and expected. Obviously, as we proved with flying rockets in the Makeover session, animation can be taken too far (most engineers I know dream of being gamers) and I recently came across a blog post, flying pixels, which sheds some sensible light on the subject. However, adding subtle animation to give texture, guidance, and pleasure to your users is easy with JavaFX. The goods are in the javafx.animation and javafx.animation.transition packages. Timeline is the driver behind the general animation framework; however, many common transitions are greatly simplified by using the canned transition classes, such as FadeTransition, RotateTransition, and PathTransition.
In GUI Makeover we added rollover effects to our toolbar buttons as follows:

    var label:Text; // label on toolbar button

    var fadein:FadeTransition = FadeTransition {
        node: bind label
        fromValue: 0
        toValue: 1.0
        duration: 100ms // fade-in is fast, don't want UI to feel sluggish
    };

    var fadeout:FadeTransition = FadeTransition {
        node: bind label
        fromValue: 1.0
        toValue: 0.0
        duration: 500ms // fade-out is slower, like a trailing thought....
    };

    composeButtonLabel = VBox {
        content: bind [
            label = Text {
                ...
                opacity: 0.0 // label doesn't show until mouse-over
            }
        ]
        onMouseEntered: function(event:MouseEvent):Void {
            fadein.playFromStart();
        }
        onMouseExited: function(event:MouseEvent):Void {
            fadeout.playFromStart();
        }
    }

And don't forget the rule of thumb that the best transitions are subliminal and aren't even noticed by your users.

10. Draw Your own Conclusions

Skepticism is healthy -- especially in this internet age where we're all peddling something (technology, opinions, egos, tattered copies of The Mythical Man Month). But if you're a Swing developer looking to blend your Java code and skills with next-generation graphics, I'd encourage you to try JavaFX. You can even take your Java and Swing code with you.
https://www.coderanch.com/t/453426/java/Insider-Guide-Mixing-Swing-JavaFX
Hello and welcome back. In this chapter, we will create a small sports score application with the help of the sports.py module. We will create a simple application to retrieve the score of an NBA result for any two particular teams and print it in tkinter's text widget. Before we start, you will need to download the sports.py module to your own computer. Since the module is on pypi.org, all you need to do is open up your Windows command prompt and type in the line below to download that module! As I have mentioned earlier, I am now using Visual Studio 2019 to develop this new python project.

    pip install sports.py

Now just start Visual Studio 2019, then type the below code into the code editor.

    import sports
    import json
    from tkinter import *
    import tkinter.ttk as tk

    win = Tk() # Create tk instance
    win.title("NBA") # Add a title
    win.resizable(0, 0) # Disable resizing the GUI
    win.configure(background='white') # change window background color

    selectorFrame = Frame(win, background="white") # create top frame to hold team 1 vs team 2 combobox
    selectorFrame.pack(anchor="nw", pady=2, padx=10)
    match_label = Label(selectorFrame, text="Select Team 1 vs Team 2 :", background="white")
    match_label.pack(anchor="w") # the team label

    # Create a combo box for team 1
    team1 = tk.Combobox(selectorFrame)
    team1.pack(side=LEFT, padx=3)

    # Create a combo box for team 2
    team2 = tk.Combobox(selectorFrame)
    team2.pack(side=LEFT, padx=3)

    s = StringVar() # create string variable

    # create match frame and text widget to display the incoming match data
    matchFrame = Frame(win)
    matchFrame.pack(side=TOP)
    match = Label(matchFrame)
    match.pack()
    text_widget = Text(match, fg='white', background='black')
    text_widget.pack()
    s.set("Click the find button to find out the match result")
    text_widget.insert(END, s.get())

    buttonFrame = Frame(win) # create a bottom frame to hold the find button
    buttonFrame.pack(side=BOTTOM, fill=X, pady=6)

    # fill up the combo boxes with the team name data from the text file
    team_tuple = tuple()
    f = open("TextFile1.txt", "r")
    for line in f.readlines():
        line = line.replace('\n', '')
        team_tuple += (line, )
    f.close()
    team1["values"] = team_tuple
    team1.current(1)
    team2["values"] = team_tuple
    team2.current(0)

    def get_match(): # return the recent match of team 1 vs team 2
        try:
            match = sports.get_match(sports.BASKETBALL, team1.get(), team2.get())
            text_widget.delete('1.0', END) # clear all the previous text first
            s.set(match)
            text_widget.insert(INSERT, s.get()) # display team match data in text widget
        except:
            print("An exception occurred")

    action_vid = tk.Button(buttonFrame, text="Find", command=get_match) # button used to find out the team match data
    action_vid.pack()

    win.mainloop()

Select the debug tab then start debugging the program.

Create a #sport score data application with #python program pic.twitter.com/a8Ln2v31nY — TechLikin (@ChooWhei) March 18, 2019

In the next chapter, we will further modify this sports score application, maybe with another module and API, because there are a few shortcomings in this current module.
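The combo boxes are populated from a plain text file that holds one team name per line. Isolating that loading step as its own function makes it easy to test on its own; here is a minimal sketch of that step (the helper name load_teams and the temporary file are my own, not part of the original program):

```python
def load_teams(path):
    """Read one team name per line and return them as a tuple,
    mirroring the team_tuple loop in the program above."""
    teams = tuple()
    with open(path, "r") as f:
        for line in f.readlines():
            teams += (line.replace('\n', ''),)
    return teams
```

The resulting tuple can then be assigned to a combo box in the same way, e.g. team1["values"] = load_teams("TextFile1.txt").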
https://www.cebuscripts.com/2019/03/18/codingdirectional-create-a-sports-score-application-with-python/
Listing Four, genData.c, creates a cigar-shaped cloud of points around a two-dimensional line as shown in the figure. Figure 2 is the point cloud distribution produced by the compiled genData.c application for a hundred data points with a 0.1 variance.

Figure 2: Example of a linear dataset for PCA.

The program writes the data in binary format to a user-specified file for use in the data-fitting step. The data can be piped through stdout to another program (say, one running natively on the Intel Xeon Phi coprocessor) by specifying a filename of "-" on the command line.

Listing Four

    // Rob Farber
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    // get a uniform random number between -1 and 1
    inline float f_rand() {
      return 2*(rand()/((float)RAND_MAX)) -1.;
    }

    void genData(FILE *fn, int nVec, float xVar)
    {
      float xMax = 1.1;
      float xMin = -xMax;
      float xRange = (xMax - xMin);

      // write header info
      uint32_t nInput=2;
      fwrite(&nInput, sizeof(int32_t), 1, fn);
      uint32_t nOutput=0;
      fwrite(&nOutput, sizeof(int32_t), 1, fn);
      uint32_t nExamples=nVec;
      fwrite(&nExamples, sizeof(int32_t), 1, fn);

      for(int i=0; i < nVec; i++) {
        float t = xRange * f_rand();
        float z1 = t + xVar * f_rand();
    #ifdef USE_LINEAR
        float z2 = t + xVar * f_rand();
    #else
        float z2 = t*t*t + xVar * f_rand();
    #endif
        fwrite(&z1, sizeof(float), 1, fn);
        fwrite(&z2, sizeof(float), 1, fn);
      }
    }

    int main(int argc, char *argv[])
    {
      if(argc < 5) { // program name plus four arguments
        fprintf(stderr,"Use: filename nVec variance seed\n");
        exit(1);
      }
      char *filename=argv[1];
      FILE *fn=stdout;
      if(strcmp("-", filename) != 0)
        fn=fopen(filename,"w");
      if(!fn) {
        fprintf(stderr,"Cannot open %s\n",filename);
        exit(1);
      }
      int nVec = atoi(argv[2]);
      float variance = atof(argv[3]);
      srand(atoi(argv[4]));

      genData(fn, nVec, variance);

      if(fn != stdout) fclose(fn);
      return 0;
    }

Building and Running the PCA Analysis

Once the nlopt library has been built, the train.c and pred.c source files from this article should be copied to disk.
In addition, the Python script genFunc.py from the previous article needs to be copied to this directory. Create and change to a subdirectory and call it pca. Copy the genData.c and myFunc.h files to this subdirectory. Now use the genFunc.py script to create a 2x10x1x10x2 autoencoder:

    python ../genFunc.py > fcn.h

The following bash commands will build the executable files. The nlopt default installation directory $HOME/install was used to access the nlopt include and library files. The settings

    APP=pca
    FLAGS="-DUSE_LINEAR -std=c99 -O3 -openmp -fgnu89-inline "

make up the BUILD script, which will create the following applications:

gen_pca: Generates the PCA data set.
train_pca.mic: The native mode training application.
train_pca.off: The offload mode training application.
train_pca.omp: A training application that will run in parallel on the host processor cores.
pred_pca: The sequential prediction program that will run on the host.

Fitting a PCA Autoencoder Using Offload Mode

The following RUN_OFFLOAD script will generate a PCA data set of 30,000,000 observations with a variance of 0.1 that the offload mode train_pca.off executable will fit. A 1000-point prediction set with zero variance will be used for prediction purposes. The UNIX tail command strips off some informative messages at the beginning of the prediction results saved in the file plot.txt to make it easy to graph the final result. The original results are kept in the output.txt file.
APP=p

Results in:

    $ sh RUN_OFFLOAD
    myFunc generated_PCA_func LINEAR()
    nExamples 30000000
    Number Parameters 83
    Optimization Time 94.0847
    found minimum 28.28534843 ret 1
    number OMP threads 240
    DataLoadTime 3.02742
    AveObjTime 0.00425339, countObjFunc 22108, totalObjTime 94.0339
    Estimated flops in myFunc 128, estimated average GFlop/s 902.81
    Estimated maximum GFlop/s 942.816, minimum GFLop/s 13.2578

The offload mode training averaged 902 gigaflops, in part because there was at least one call to the objective function that ran at only 13 gigaflops per second. Use of the timing framework in "Getting to 1 Teraflop on the Phi" shows that the first few calls to the Phi can be slow relative to the performance shown in the remaining calls.

The gnuplot application is used to generate a scatterplot of the predicted data using the fitted parameters:

    gnuplot -e "unset key; set term png; set output \"pca_pred.png\"; \
    plot \"plotdata.txt\" u 5:6"

Comparison of the resulting graph (Figure 3) shows that the optimized autoencoder did find a reasonable-looking fit to the data shown in Figure 2.

Figure 3: Offload mode PCA line prediction.

VTune Performance Analysis

Symbol table information can be added to the offload compilation command with the -g option. This option enables runtime behavior of the Phi coprocessor to be examined with the Intel VTune performance analyzer. The following commands were used to compile train_pca.off for VTune analysis. Note the addition of the -g option to the FLAGS variable:

    APP=pca
    FLAGS="-g -DUSE_LINEAR -std=gnu99 -O3 -openmp"
    INC=$HOME/install/include
    LIB=$HOME/install/lib
    icc $FLAGS ../train.c -I . -I $INC -L $LIB -lnlopt -lm -o train_$APP.off

The basic information for utilizing VTune can be found in the Intel VTune Amplifier XE 2013 Start Here document. Once configured, starting amplxe-gui can perform the "Knights Corner Platform Lightweight Hotspots Analysis." Set "automatically stop application" to 60 seconds.
Click start and the summary screen shown in Figure 4 will appear after the application runs for one minute.

Figure 4: CPI and Top Hotspots.

The summary shows that most of the runtime is spent in myFunc(). Reviewing the Intel document "Intel VTune Performance Analyzer Basics: What Is CPI and How Do I Use It?" makes it reasonable to suspect that the warning about the high CPI is the result of heavy use of the per-core wide-vector units on the KNC chip. Some of the time is spent in the Intel Xeon Phi coprocessor threads and on the CPU. In a tightly coupled calculation like a reduction, the slowest thread is the one that controls the overall runtime.

Looking at the source level view in Figure 5, we see that the dot products consume a significant amount of the runtime (see the left side). The VTune analyzer very nicely highlights the assembly language instructions in the Assembly view (shown on the right) associated with the C source line on the left. The highlighted assembly instructions show that the dot-product is composed of a data move and a wide-vector fused multiply-add instruction, VFMADD213PS. The "Intel Xeon Phi Coprocessor Instruction Set Architecture Reference Manual" states:

"VFMADD213PS - Multiply First Source By Destination and Add Second Source Float32 Vectors. Performs an element-by-element multiplication between float32 vector zmm2 and float32 vector zmm1 and then adds the result to the float32 vector result of the swizzle/broadcast/conversion process on memory or vector float32 zmm3. The final sum is written into float32 vector zmm1."

Figure 5: VTune Source and Assembly views.

From this brief analysis using VTune, we have a fair degree of confidence that most of the computation time is spent in myFunc() performing efficient fused multiply-add wide-vector instructions.
http://www.drdobbs.com/cpp/fast-ip-routing-with-lc-tries/cpp/numerical-and-computational-optimization/240151128?pgno=3
You could consider this another in the Small Programming Enhancement (SPE) series. You'll probably also notice I've been doing quite a lot of REXX programming recently. Anyway, here's a tip for refactoring code I like.

Suppose you have a line of code:

    norm_squared=(i-j)*(i-j)

that you want to turn into a function. No biggie:

    norm2: procedure
    parse arg x,y
    return (x-y)*(x-y)

and call it with:

    norm_squared=norm2(i,j)

Try the following, though:

    /* REXX */
    do i=1 to 10
      do j=1 to 10
        say i j norm2(i,j)
        /* do never */
        if 0 then do
          norm2: procedure
          parse arg x,y
          return (x-y)*(x-y)
        /* end do never */
        end
        say "After procedure definition"
      end
    end
    exit

(... did do this but it immediately failed once I tried it inside a loop: you learn and move on.) What this enables you to do is to develop the function "inline" and then you can move it later - to another candidate invocation or indeed to the end of the member (or even to a separate member). It saves a lot of scrolling about and encourages refactoring into separate routines. It's not the same as an anonymous function but it's heading in that direction, in terms of utility.
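For comparison, the "develop the helper right where it's first used, then move it out later" workflow is something languages with nested functions support directly; here is a Python sketch of the same idea (my own illustration, not REXX):

```python
def scan():
    # Developed "inline", right next to its first call site; once it
    # settles down it can be moved to module level (or another module).
    def norm2(x, y):
        return (x - y) * (x - y)

    return [norm2(i, j) for i in range(1, 4) for j in range(1, 4)]
```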
https://www.ibm.com/developerworks/community/blogs/MartinPacker/entry/refactoring_rexx_temporarily_inlined_functions?lang=zh
Chapter 13. Inputs and Outputs

So far in this book, the examples have assumed that the inputs to the query are individual XML documents that are accessed through the doc function. This chapter goes into further detail about the various options for accessing input documents in XQuery. It also describes output documents, including the many different options for serializing query results to files.

Types of Input and Output Documents

While XQuery is typically associated with querying and returning XML, it actually can also work with text files and JSON documents.

- XML: XML documents are by far the most common input to XQuery. Technically, the input might not be an entire XML document; it might be a document fragment, such as an element or sequence of elements, possibly with children. It might not be a physical XML file at all; it might be data retrieved from an XML database, or an in-memory XML representation that was generated from non-XML data. If the input document is physically stored in XML syntax, it must be well-formed XML. This means that it must comply with XML syntax rules, such as that every start tag has an end tag, there is no overlap among elements, and special characters are used appropriately. It must also use namespaces appropriately. This means that if colons are used in element or attribute names, the part before the colon must be a prefix that is bound to a namespace using a namespace declaration. Whether it is physically stored as an XML document or not, an input document ...
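The well-formedness rules listed above (matching end tags, no overlap among elements, bound namespace prefixes) can be checked with any conforming XML parser before a document ever reaches an XQuery processor. A small Python sketch (my own, not from the book) using the standard library parser:

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if xml_text parses as a well-formed XML document."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

print(is_well_formed("<a><b/></a>"))     # True
print(is_well_formed("<a><b></a></b>"))  # False: overlapping elements
print(is_well_formed("<p:a/>"))          # False: unbound namespace prefix
```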
https://www.oreilly.com/library/view/xquery-2nd-edition/9781491915080/ch13.html
I have the following functions:

    def isAllDigits(x: String) = x forall Character.isDigit

    def filterNum(x: (Int, String)) : Boolean = {
      accumNum.add(1)
      if(isAllDigits(x._2)) false
      else true
    }

I am passing in key/values and I want to check that the values are numeric. For some reason it is filtering out:

    res10: Array[(Int, String)] = Array((1,18964), (2,39612), (3,1), (4,""), (5,""), (6,""), (7,""), (8,""), (9,1), (10,""))

but allowing this:

    res9: Array[(Int, String)] = Array((18,1000.0), (22,23.99), (18,1001.0), (22,23.99), (18,300.0), (22,23.99), (18,300.0), (22,23.99), (18,300.0), (22,23.99))

Does .isDigit only allow doubles? But I am confused as to why, when x is (Int,String), the double/int being passed in is being seen as a string.

Edit: I am using this function in Spark with the following:

    val numFilterRDD = numRDD.filter(filterNum)

numRDD.take() example:

    res11: Array[(Int, String)] = Array((1,18964), (2,39612), (3,1), (4,""), (5,""), (6,""), (7,""), (8,""), (9,1), (10,""), (11,""), (16,""), (18,1000.0), (19,""), (20,""), (21,""), (22,23.99), (23,""), (24,""), (25,""))

The problem is that you are running through each character separately. So, in the case of a double, the decimal point gets checked, and that character by itself is not a digit:

    Character.isDigit('.') //false

You might be better off using a regex, with the fractional part made optional so that plain integers still match:

    x matches """^\d+(\.\d+)?$"""
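Since Scala's Character.isDigit is just java.lang.Character.isDigit, the same behavior can be reproduced in plain Java. This sketch (the class and method names are mine) contrasts the per-character check with the regex approach:

```java
public class DigitCheck {

    // Same check the Scala code performs, one character at a time.
    static boolean isAllDigits(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (!Character.isDigit(s.charAt(i))) return false;
        }
        return true; // note: vacuously true for "", matching Scala's forall
    }

    // Regex alternative with an optional fractional part.
    static boolean isNumeric(String s) {
        return s.matches("^\\d+(\\.\\d+)?$");
    }

    public static void main(String[] args) {
        System.out.println(isAllDigits("18964")); // true
        System.out.println(isAllDigits("23.99")); // false: '.' is not a digit
        System.out.println(isNumeric("23.99"));   // true
        System.out.println(isNumeric("1"));       // true
    }
}
```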
http://www.dlxedu.com/askdetail/3/c11c867ed5a22cd68d60be7884092b15.html
Web components: from zero to hero, part three Web Components hero with 💥LitElement💥 - [x] Recap - [ ] Properties and attributes - [ ] Lifecycle and rerendering - [ ] Conclusion Lit-html and LitElement finally got their official (1.0 and 2.0 respectively) releases, and that makes it a great time to wrap up the Web Components: from Zero to Hero blog series. I hope you've found these blogs useful as you've read them; they've been a blast to write, and I very much appreciate all the feedback and response I've gotten! Huh? A 2.0 release? Lit-element has moved away from the @polymer/lit-elementnamespace, to simply: lit-element. The lit-elementnpm package was previously owned by someone else and had already had a release, hence the 2.0 release. Let's get to it! In the last blog post we learned how to implement lit-html to take care of templating for our web component. Let's quickly recap the distinction between lit-html and lit-element: - Lit-html is a render library. It provides the what and the how. - LitElement is a web component base class. It provides the when and the where. I also want to stress that LitElement is not a framework. It is simply a base class that extends HTMLElement. We can look at LitElement as an enhancement of the standard HTMLElement class, that will take care of our properties and attributes management, as well as a more refined rendering pipeline for us. Lets take a quick look at our to-do-item component, rewritten with LitElement. 
You can find the full demo here, and on the github page:

    import { LitElement, html, css } from 'lit-element';

    class TodoItem extends LitElement {
        static get properties() {
            return {
                text: { type: String, reflect: true },
                checked: { type: Boolean, reflect: true },
                index: { type: Number }
            }
        }

        constructor() {
            super();
            // set some default values
            this.text = '';
            this.checked = false;
        }

        _fire(eventType) {
            this.dispatchEvent(new CustomEvent(eventType, { detail: this.index }));
        }

        static get styles() {
            return css`
                :host {
                    display: block;
                    font-family: sans-serif;
                }

                .completed {
                    text-decoration: line-through;
                }

                button {
                    cursor: pointer;
                    border: none;
                }
            `;
        }

        render() {
            return html`
                <li class="item">
                    <input type="checkbox" ?checked=${this.checked} @change=${() => this._fire('onToggle')}></input>
                    <label class=${this.checked ? 'completed' : ''}>${this.text}</label>
                    <button @click=${() => this._fire('onRemove')}>❌</button>
                </li>
            `;
        }
    }
We can even specify how we want attributes to be reflected: static get properties() { return { text: { type: String, reflect: true, attribute: 'todo' } } } Will reflect the text property in our DOM as the following attribute: <to-do-item</to-do-item> Are you still confused about how reflecting properties to attributes works? Consider re-visiting part one of this blog series to catch up. Additionally, and perhaps most importantly, the static properties getter will react to changes and trigger a rerender when a property has changed. We no longer have to call render functions manually to update, we just need to update a property, and LitElement will do all the work for us. ✨ Hey! Listen! You can still use custom getters and setters, but you'll have to manually call this.requestUpdate()to trigger a rerender. Custom getters and setters can be useful for computed properties. ♻️ Lifecycle and rerendering - [x] Recap - [x] Properties and attributes - [x] Lifecycle and rerendering - [ ] Conclusion Finally, let's take a look at our to-do-app component: import { LitElement, html } from 'lit-element'; import { repeat } from 'lit-html/directives/repeat'; import './to-do-item.js'; class TodoApp extends LitElement { static get properties() { return { todos: { type: Array } } } constructor() { super(); this.todos = []; } firstUpdated() { = ''; } } static get styles() { return css` :host { display: block; font-family: sans-serif; text-align: center; } button { border: none; cursor: pointer; } ul { list-style: none; padding: 0; } `; } render() { return html` <h1>To do</h1> <form id="todo-input"> <input type="text" placeholder="Add a new to do"></input> <button @click=${this._addTodo}>✅</button> </form> <ul id="todos"> ${repeat(this.todos, (todo) => todo.text, (todo, index) => html` <to-do-item .checked=${todo.checked} .index=${index} .text=${todo.text} @onRemove=${this._removeTodo} @onToggle=${this._toggleTodo}> </to-do-item>` )} </ul> `; } } 
window.customElements.define('to-do-app', TodoApp); You'll notice that we've changed our functions up a little bit. We did this, because in order for LitElement to pick up changes and trigger a rerender, we need to immutably set arrays or objects. You can still use mutable patterns to change nested object properties or objects in arrays, but you'll have to request a rerender manually by calling this.requestUpdate(), which could look like this: _someFunction(newValue) { this.myObj.value = newValue; this.requestUpdate(); } Which brings us to LitElement's lifecycle. It's important to note that LitElement extends HTMLElement, which means that we'll still have access to the standard lifecycle callbacks like connectedCallback, disconnectedCallback, etc. Additionally, LitElement comes with some lifecycle callbacks of it's own. You can see a full example of LitElement's lifecycle here. shouldUpdate() You can implement shouldUpdate() to control if updating and rendering should occur when property values change or requestUpdate() is called. This can be useful for when you don't want to rerender. firstUpdated() firstUpdated is called when... well, your element has been updated the first time. This method can be useful for querying dom in your component. updated() Called right after your element has been updated and rerendered. You can implement this to perform post-updating tasks via DOM APIs, for example, focusing an element. Setting properties inside this method will not trigger another update. And as I mentioned before, you can still implement connectedCallback() and disconnectedCallback(). Conclusion - [x] Recap - [x] Properties and attributes - [x] Lifecycle - [x] Conclusion If you've made it all this way; congratulations! You are now a web components super hero. I hope this blog series was helpful and informative to you, and that it may function as a reference for when you need to remember something about web components. 
If you're interested in getting started with Web Components, make sure to check out open-wc. Open-wc provides recommendations including anything betwixt and between: developing, linting, testing, tooling, demoing, publishing and automating, and will help you get started in no time. If you want to stay up to date with the lit-html/LitElement community, I recommend checking out the awesome-lit-html repo, or joining the Polymer slack. Feel free to reach out to me on twitter if you have any questions. Discussion (0)
https://dev.to/thepassle/web-components-from-zero-to-hero-part-three-3c5h
Can we please get a response from the developers, along with either a blog post or a documentation page, about how to handle local assets - images, JavaScript includes, etc. - as there is little to no documentation on how to include a JS library or even a local image. Even the ion-img documentation cops out of providing example src attributes that might have been helpful.

So far, adding local images to my src/assets folder works using ionic serve, but will only work when running on a device if the leading slash is left off the path.

    <!-- This works in a page template on ionic serve & running on device -->
    <img src="assets/myimage.png">

    <!-- This doesn't work in index.html on a device -->
    <script src="assets/myscript.js"></script>

I still haven't managed to include a JS library into index.html from my assets folder in the same way that worked with an image, though - why does this work inconsistently? Some libraries - older ones specifically - while working with things like Google Maps and any of the utility libraries, simply need to be included, as they add their goodness to the window object directly. And you can't import them - they're not built for module loaders.

Please please please (pretty please) can we get a response on this from the developers, as I've seen thread after thread and it's actually causing me to have to delay the push of one of my apps to a client. Thanks! Dan.
https://forum.ionicframework.com/t/help-asset-paths-for-all-os-types-we-need-more-documentation/103217
Some table features are lost when a .doc template is rendered. Table borders do not render as they have been laid out in the template. In the attached ERFC 5-2.doc, notice that when it is converted to PDF it is showing vertical lines between some of the cells. There should be 3 separate boxes down the right hand side, but they are connected on the left edge. In the attached ERFC 18.pdf.doc, notice that the boxes are arbitrarily shifted to the left and right, messing the alignment up. Can you please look at these conversions and let me know what can be done to resolve these? Thank you.

Hello Mark, I have tested the issue using the latest version of Aspose.Words 5.2.2.0 and a hotfix for Aspose.Pdf 3.8.0.4, and in my case the Pdf files are being generated fine. I have attached the hotfix and the resultant files, please take a look. In case of any further issue, feel free to contact.

The PDFs that you attached both exhibit the problems that I mentioned. The first shows lines between the left hand side of the middle right hand box and the boxes to the top and bottom of it, which is incorrect. The second shows the 2nd and 3rd boxes indented and different lengths. Everything should be uniform and aligned.

Hello Mark, I have tested the issue and I'm able to reproduce the same problem. I am really sorry that I couldn't notice the problem in my previous test. I have logged it in our issue tracking system as PDFNET-6239. We will investigate this issue in detail and will keep you updated on the status of a correction. We apologize for your inconvenience.

Hello! Thank you for considering Aspose. I'm Viktor, the developer on the Aspose.Words Team responsible for integration with Aspose.Pdf. My colleagues asked me to inspect the issue with text borders.
Currently distance from text is not supported in PDF export. We need cooperation with the Aspose.Pdf Team to complete this task. On the Aspose.Words side this issue is known as WORDSNET-445. We’ll provide you updates on any progress or when it is fixed. As a workaround you can consider putting some text in tables with one cell each. Tables behave more predictably in PDF export and don’t need to define distances from text. In particular, if you’d like to have three equally aligned blocks you normally create three tables. Please let us know whether this workaround helps you. Regards, Ok, that explains the table misalignment. What about the extra lines that are being displayed inside the table on the first document? Have you found anything with that issue? Thanks. Hi, The problem in the first document should be resolved on our side (Aspose.Pdf). We have found a solution for it. We will fix it and send you an update ASAP. Best regards.
https://forum.aspose.com/t/word-to-pdf-conversion-not-rendering-tables-correctly/124767
Hello there, is there a way to convert a char[2] = '1', '2' kinda thing to an int ?

>is there a way to convert a char[2] = '1', '2' kinda thing to an int ?

Yes. It's easiest to store the characters as a string first though, so that you can take advantage of standard solutions without resorting to something manual. A good solution is using strtol to make the conversion:

#include <stdio.h>
#include <stdlib.h>

int main ( void )
{
    char buffer[BUFSIZ];

    while ( fgets ( buffer, sizeof buffer, stdin ) != NULL ) {
        char *end;
        int value;

        value = (int)strtol ( buffer, &end, 0 );

        if ( end != buffer )
            printf ( "Value = %d\n", value );
        else
            puts ( "No conversion made!" );
    }

    return 0;
}

Just for grins, you can also do it manually. Though as you can see, getting it right is a smidge more complicated than using the standard library:

#include <stdio.h>
#include <string.h>
#include <errno.h>

const char *parse_int ( const char *s, int *value );

int main ( void )
{
    char buffer[BUFSIZ];

    while ( fgets ( buffer, sizeof buffer, stdin ) != NULL ) {
        int x;
        const char *p;

        buffer[strcspn ( buffer, "\n" )] = '\0';
        errno = 0;
        p = parse_int ( buffer, &x );

        if ( errno != 0 )
            puts ( "Overflow!" );

        if ( *p == '\0' )
            puts ( "Successful conversion!" );
        else
            printf ( "Unconverted: \"%s\"\n", p );

        if ( p != buffer )
            printf ( "Value = %d\n", x );
        else
            puts ( "No value found" );
    }

    return 0;
}

#include <ctype.h>
#include <errno.h>
#include <limits.h>

const char *parse_int ( const char *s, int *value )
{
    /* Base of the final converted value */
    const unsigned base = 10;

    /* Largest possible value without the least significant digit */
    const unsigned limit = UINT_MAX / base;

    /* Least significant digit from the largest possible value */
    const unsigned top_digit = UINT_MAX % base;

    unsigned overflow = 0; /* True if integer overflow occurs */
    unsigned sign = 0;     /* Final sign of the converted value */
    unsigned temp = 0;     /* The intermediate converted value */
    unsigned n = 0;        /* Count of converted digits */

    /* Save and skip over the sign if present */
    if ( *s == '-' || *s == '+' )
        sign = *s++ == '-';

    /* Build the intermediate value */
    for ( ; isdigit ( *s ); s++, n++ ) {
        unsigned digit = *s - '0';

        /* This protects *only* the intermediate value from overflow.
           Overflow of the final value requires further checks */
        overflow = temp > limit || ( temp == limit && digit > top_digit );

        if ( overflow )
            break;

        /* Shift-add by the base */
        temp = temp * base + digit;
    }

    if ( n > 0 ) {
        /* A conversion was made, so now we need to deal with
           overflow and set the final value */
        if ( overflow
            || ( sign && temp > (unsigned)INT_MAX + 1 )
            || ( !sign && temp > INT_MAX ) )
        {
            /* The intermediate actually overflowed, or converting it
               to int would overflow. Either way it's an error to the
               caller */
            errno = ERANGE;
            temp = sign ? (unsigned)INT_MAX + 1 : INT_MAX;
        }

        *value = sign ? -(int)temp : (int)temp;
    }
    else if ( sign ) {
        /* We found a sign and skipped over it. But because no
           conversion was made, we need to "unskip" the sign */
        --s;
    }

    return s;
}

Whatever you do, don't use atoi. It's an evil function that doesn't handle errors well. The only safe way to use atoi is to basically parse the string first to make sure that it's a valid integer.
If you're going to do that, you might as well convert it manually in the process like parse_int. :icon_rolleyes:

>actually there is a lot simple way to do it

That's exactly what my parse_int does. The only difference is that I added a lot of necessary checks and extra logic for a generalized solution. A naive approach would look like this:

#include <stdio.h>

int parse_int ( const char *s );

int main ( void )
{
    char buffer[BUFSIZ];

    while ( fgets ( buffer, sizeof buffer, stdin ) != NULL )
        printf ( "Value = %d\n", parse_int ( buffer ) );

    return 0;
}

#include <ctype.h>

int parse_int ( const char *s )
{
    unsigned sign = 0;   /* Sign of the converted value */
    unsigned result = 0; /* The converted value (no sign) */

    /* Save and skip over the sign if present */
    if ( *s == '-' || *s == '+' )
        sign = *s++ == '-';

    /* Build the result value */
    while ( isdigit ( *s ) )
        result = result * 10 + ( *s++ - '0' );

    /* Add the sign and force integer range */
    return sign ? -(int)result : (int)result;
}

That's about as simple as it gets without being blatantly wrong or assuming a certain number of characters in the result. You could simplify it significantly if you took the OP's example and assumed only two digit values:

#include <stdio.h>

int main ( void )
{
    char buffer[BUFSIZ];

    while ( fgets ( buffer, sizeof buffer, stdin ) != NULL ) {
        printf ( "Value = %d\n",
                 ( 10 * ( buffer[0] - '0' ) ) + buffer[1] - '0' );
    }

    return 0;
}

>because of the ASCII character codes

No. C doesn't require characters to have ASCII values. What C does require is that the values of the decimal digits have to be consecutive. That's why subtracting '0' works for the characters '0' through '9', regardless of the character set being used.
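One thing the strtol example above glosses over: its (int) cast can silently truncate values that fit in a long but not in an int. Pairing strtol with errno and an explicit range check closes that hole. A small sketch along those lines (the helper name str_to_int is mine, not from the thread):

```c
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Hypothetical helper, not from the thread: returns 0 on success,
   -1 if no digits were found, -2 if the value doesn't fit in an int. */
int str_to_int ( const char *s, int *out )
{
    char *end;
    long v;

    errno = 0;
    v = strtol ( s, &end, 10 );

    if ( end == s )
        return -1; /* strtol made no conversion */

    if ( errno == ERANGE || v < INT_MIN || v > INT_MAX )
        return -2; /* out of range for int (or even for long) */

    *out = (int)v;

    return 0;
}
```

This keeps the error reporting of the hand-rolled parse_int while leaning on the standard library for the actual digit loop.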
https://www.daniweb.com/programming/software-development/threads/103482/converting-a-char-to-an-int
> Anyway, I hadn't looked at the latest 5.2 when I wrote that. If
> luaL_cpcall acts like lua_cpcall but with return values, then it would
> be marginally easier, but you'd still have that extra C function,
> which is a large part of the pain of that solution.

Well, that is the general philosophy of Lua. Usually it does not provide functions that you can write yourself with a few lines of code, unless it is frequently needed. Were Lua to provide a protected pushstring, it should provide protected versions of many other functions, such as newtable, newuserdata, settable, etc. Most functions in the API can throw a memory error, and I do not see why pushstring deserves any special treatment.

> I don't see luaL_cpcall per se in the manual or source of 5.2-work2,
> though, so I can't really tell the details; perhaps I'm missing
> something.

This function was not in the work2 manual. Its entry is like this:

int luaL_cpcall (lua_State *L, lua_CFunction f, int nargs, int nresults);

Calls a C function in protected mode. This function is equivalent to lua_pcall, but it calls the C function f (instead of a Lua function in the stack) and it does not have an error handler function.

So, I guess it will boil down to this:

int pushaux (lua_State *L) {
  lua_pushstring(L, (char *)lua_touserdata(L, 1));
  return 1;
}

const char *ppushstring (lua_State *L, const char *s) {
  lua_pushlightuserdata(L, (void *)s);
  return (luaL_cpcall(L, pushaux, 1, 1) == LUA_OK)
         ? lua_tostring(L, -1) : NULL;
}

(This last function could be a macro.)

-- Roberto
http://lua-users.org/lists/lua-l/2010-03/msg00935.html
Exclude. “Great Jesse, no one cares about your vendetta, so how do we fix it?” I have no clue, but the server architect I work with, John Howard, had a good idea. We’ll probably implement it in the next 2 weeks after we hit some interim milestones. I’ll explain his idea at the bottom, but first a review. If you want to skip the review, go to the bottom.

Why Exclude Classes?

When creating large Flash websites or applications, you’ll tend to have many SWF’s. This was especially true with AS2, driven not just from a media perspective, but also from a performance one. If you go to a visual website, say Flickr, and spend 20 minutes on it, you’ll probably download hundreds of images. Rather than download 20 minutes’ worth of images initially, you download on the fly, and download what’s needed. Flash and Flex apps can be built this way; it’s called excluding classes in Flash, and modules in Flex. In Flash, the main way to download assets over time is to load external SWF’s into another SWF. Pre-AS3, you use the loadMovie/loadMovieNum functions, the MovieClipLoader class, or the Loader component to do this. Although you can load images and video externally from a SWF just like a webpage can, the SWF format itself has visual advantages, such as images that can have alpha channels yet also have JPEG compression applied to them. JPEG does not have alpha channels, GIF only has a 1-bit alpha, and PNG never got the glorious browser support we all desired and only has lossless compression. When creating visual layouts that mesh a lot of media, such as a multitude of images, audio, and/or video, with animations, this can be a stand-alone SWF. Typically those types of things would be put into their own symbols such as a Graphic or MovieClip. However, if they got large enough, you should make them an external SWF for 3 reasons. The first is that it’s faster for you to make changes to the SWF.
When you do a test movie, aka compile a SWF, Flash has to render animations, compress your bitmaps and sounds, as well as integrate your video if any. This can take a lot of time. Sometimes that time isn’t actually that bad, but because of the frequency with which you are re-compiling your SWF’s to check for visual accuracy, doing this say every 2 minutes, if you end up saving 2 seconds per compile… well, hey, that’s almost 10 minutes of your day you just saved right there! This is more apparent for larger media, like mp3-compressing a bunch of audio or integrating video, which can make your FLA’s take a minute or more to compile. The second is that you can get help, aka RAD development. Instead of sharing the same FLA, you can have a project broken up into multiple FLA’s that another designer/developer can use, so you both can work together on the same project. The third is that you only make the user download what they need to download. If you have a site that has 6 sections, you only need to make the user download the first section. You can either load another section when the user clicks to go there, or download the rest in the background once the first is done. The work-flow for visual projects in Flash is dead easy. There are a lot of frameworks, such as Gaia, and methodologies out there for facilitating this kind of work. For Flex, you have to be a rocket scientist. Flex 3 has some help in this department, but it is a long way off from how easy this is in Flash. For Flash applications, it’s a little different. In the beginning, our first enhancement to this was Remote Shared Libraries. What this allowed you to do was have a “reference MovieClip”, or any other symbol for that matter, in your library. Your symbol would record, as metadata, the SWF it is stored in. You’d go make that symbol in another FLA, and upload those SWF’s together to your website.
When your SWF was played, if a remotely shared symbol appeared on the timeline, the Flash Player would automatically go download it for you. The bad part was you really didn’t have much in the way of code control over, or events about, this. However, that ended up saving you a lot of work since it did it for you. Either way, this worked out pretty well because you could treat the symbols as normal symbols; the FLA would actually keep a local copy. You could update the symbol from the Library in case it changed in another FLA. You could additionally ensure it updated every time you compiled, but this was the slowest thing on the planet. In the end, Remote Shared Libraries were a management nightmare in Flash. For simple projects where you had a master SWF, they worked damn well. For anything more complicated, it was what I call “Flash Jenga” (Jenga is a game consisting of a tower of stacked wooden blocks the size of your pinky. The challenge is to remove bottom blocks without making the whole thing topple). If you got it to work, you were the shit, but it was very easy to break, and once it did, things came crashing down. The tools never really helped you manage your Remote Shared Libraries. Flex Builder has some nice improvements in this realm with regards to tool support. The biggest problem, however, with Remote Shared Libraries is with regard to application development and code. Like I said, they still work well for assets; anything that isn’t ActionScript classes. Since ActionScript 2 and 3 utilize a lot of classes, and thus have dependencies, this makes things challenging when creating your applications with low initial download impact in mind. I’ll rehash my version here, but reading these 3 blog posts by one of the mxmlc compiler creators, Roger Gonzalez formerly of Adobe, is probably a little more authoritative and better written. (Multi-SWF Applications, Modular Applications (part 1), Modular Applications (part 2)).
It’s hard enough to architect an application, especially when accounting for a designer’s needs for dynamic Views. It’s another thing when you visualize a package of classes, and then have to account for how to reduce dependencies so you only load some classes in some SWF’s and others in another SWF, while being sure not to duplicate what’s in a master SWF, thus reducing the overall file size. This can lead to code duplication, a non-efficient use of an RSL, or too much in an RSL that’s not used by 80% of the application. As if software development wasn’t hard enough, you have a designer who keeps changing your MovieClip instance names. You could just say fuggit, and just load a bunch of SWF’s as users navigate to those sections. However, you’re wasting bandwidth by using duplicated code. For example, in the image below, 3 forms are loaded into a master SWF on the fly whenever a user navigates to that section. Each has its own unique set of components; however, each also has its own copy of Button, or mx.controls.Button to be exact in AS2. Say that your Button is 14k. You’ve now duplicated that 14k across 3 SWF’s. …however, why not just load that class once, and have all other SWF’s use that same already loaded class? You can! First, a review of how AS1/AS2 classes work. In the old AVM, aka the one that runs AS2 and AS1 (regardless of player version), classes are defined on _global. The _global keyword is like this object that never dies, and is accessible via the _global keyword in ActionScript. It’s also accessible across SWF’s and across _levels (a _level is basically a loaded SWF). This means that SWF’s loaded in can access classes on _global as well as define their own. They cannot, however, overwrite classes. In AS3, this behavior is controlled via ApplicationDomain; in AS2, it is what it is. Since you know you’ll be loading those 3 SWF’s into a SWF that will already have the Button class defined, you can then just not compile those 3 SWF’s with the Button class.
The user then only has to download 14k once; any other SWF that uses the Button class can use that already downloaded class. In the example below, the Button class has been moved to the shell.swf, increasing its size from 8k to 22k. However, notice that that same 14k is now removed from the login.swf, the amount.swf, and the settings.swf. In this particular example, you’ve ended up saving 28k! Now, imagine a large website or large application consisting of dozens of SWF’s that could potentially use the same class. You can see how this feature saves a ton of bandwidth.

How Exclude.xml Works

Exclude.xml works like dis. If you have a FLA called “my.fla”, you create an XML file in the same directory next to the FLA called “my_exclude.xml”. When you compile your FLA, Flash will look for that exclude.xml file. If it finds it, it’ll read it. Any class written in that XML file will NOT be compiled into your SWF. The con to this is the SWF no longer works on its own; you have to load it into another SWF to test. The plus is that you’ve now saved file-size since the SWF will get the class it needs to work from another main.swf. Most importantly, your work-flow hasn’t changed; just your SWF has. In the past, I wrote some JSFL to automate this. One JSFL script would delete the exclude.xml file, and then do a test movie. You map this JSFL to Control + Enter so your Test Movie works as normal. I then created another JSFL script to create the exclude.xml based off of a set of classes I knew would be shared throughout the site, and then test your movie. I then mapped this JSFL to Control + Alt + Enter, replacing Test Scene. That way, you can create SWF’s ready for deployment easily. Now that Flash CS3 has E4X, writing such scripts is a ton easier, and gives ANT in Eclipse a run for its money in automation. Course one could use… In Flex using mxmlc, this part is actually automated for you; the XML you need is generated via the mxmlc -link-report parameter.
You immediately feed that generated XML file to -load-externs, and booya, you have a SWF with those classes removed. So, pretty pimp, huh? Guess what; it doesn’t work in Flash CS3 for AS3. I know, right? “Yeah hi, I’ll take some AS3 with a side of lame sauce.”

Using mxmlc Doesn’t Work

I tried using mxmlc with -incremental set to true, making sure it doesn’t actually recompile the entire SWF, but just those classes that have changed. If you make sure your FLA has a Document class, this’ll work; mxmlc can use that as the root class. However, in recompiling your classes, it basically ignores the symbols your FLA has created, thus an “I’ll take it from here” attitude. Um… mxmlc, dude, f-off, at least we can use pixel fonts… You could do this in MTASC for AS2; I guess their bytecode replacement techniques are just different. Either that, or I’m maybe missing a mixture of mxmlc compiler arguments… :: crosses fingers hoping someone says so in comments ::

Bridge Pattern to the Rescue (sort of)

So, my boss suggested a proxy technique. Basically, we use the J2EE blah blah blah official blah blah Bridge Pattern. The Bridge Pattern in a nutshell is 2 classes that implement the same interface; one provides the pretty face, whilst the other does all the work. The pretty face is called the Abstraction and the one who does the work is called the Implementation. If you were going to create a Button component using the Bridge Pattern, you would do it in 3 steps.

- Create the IButton interface
- Create the Button class that implements IButton
- Create the ButtonImpl class that implements IButton, and does all the work.

Flex 2 actually uses this in a few places, specifically the ToolTipManager class. ToolTipManager merely proxies everything to ToolTipManagerImpl. He instantiates an Implementation class (ToolTipManagerImpl) via a magic string to ensure there are no dependencies, meaning the compiler doesn’t know to compile it in.
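The magic-string trick above is language-neutral, so here is a hypothetical sketch of it in TypeScript (names are mine; a plain object registry stands in for flash.utils.getDefinitionByName, since that is essentially what the Flash Player gives you at runtime):

```typescript
// Hypothetical sketch, not Adobe's code. The Abstraction (Button)
// reaches its Implementation (ButtonImpl) through a string lookup,
// so a compiler scanning Button sees no static dependency.
interface IButton {
  setSize(w: number, h: number): void;
  area(): number;
}

// The shell registers implementations at startup; child modules
// never reference ButtonImpl directly.
const registry: { [name: string]: new () => IButton } = {};

class ButtonImpl implements IButton {
  private w = 0;
  private h = 0;
  setSize(w: number, h: number): void { this.w = w; this.h = h; }
  area(): number { return this.w * this.h; }
}
registry["controls.ButtonImpl"] = ButtonImpl;

// The Abstraction: pretty face only, everything proxied to the
// Implementation looked up by a "magic string".
class Button implements IButton {
  private impl: IButton = new registry["controls.ButtonImpl"]();
  setSize(w: number, h: number): void { this.impl.setSize(w, h); }
  area(): number { return this.impl.area(); }
}
```

Calling `new Button().setSize(3, 4)` exercises ButtonImpl without Button ever naming it, which is the whole point of the pattern here.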
To ensure it is compiled in, however, they do (gross) declare a dependency variable above that. This is how the Bridge pattern will work to save file size. The child SWF’s will only contain the interfaces and abstractions; the implementations will be in the main SWF. First thing to do is create the interface, mainly for typing purposes. Now that that’s done, I do the same thing the ToolTipManager does; I instantiate the ButtonImpl class inside of Button via a magic string. This ensures that Flash doesn’t see the usage of the class. If it did, Flash would compile it in. A magic string, however, cloaks it from the compiler. Using flash.utils.getDefinitionByName, I can instantiate the ButtonImpl class with my magic string:

var btnImplClass:Class = getDefinitionByName("com.multicast.controls.ButtonImpl") as Class;
impl = new btnImplClass() as IButton;

Now, I can just create my sets of components as normal:

var btn:Button = new Button();
addChild(btn);

However, the ButtonImpl class won’t be compiled into the child SWF’s. Instead, he’ll be compiled into the main shell.swf. Based on some initial tests, I reckon the interface will run about 500 bytes to 2k, and the Button class from 2k to 4k. The Impl will be around 8k. So, being conservative and rounding up, all of my child SWF’s will no longer have 14k in them, but rather 6k. Like exclude, the developer, whether using Flex Builder with an ActionScript project or Flash CS3, can code normally with AS3’s strong-typing, and/or Flash’s timeline and symbols. The down-side is I’m still duplicating anywhere from 2k to 6k in any additional child SWF that uses Button. It’s not perfect, but it’ll do.

Unknowns

What I haven’t figured out yet is the implementation details. Premature optimization is the root of all evil, so I hear, but seriously, how much speed does keeping the interface for strong-typing really give me? What if I just remove the interface, and cast impl in the example above as a DisplayObject, or more accurately, a MovieClip?
I feel the Interface is for Java people who like a bunch of classes; to me, 2 is too many, let alone 2 and an interface. I’m only doing this because a feature that worked in 2 previous Flash IDE’s now doesn’t; don’t punish me by making me write more classes to compensate for a feature that removed classes in the first place. Convention works for the Ruby guys; why not me? I don’t really want to spend much time on benchmarks, but any dynamic lookup will cause the old AVM to be used, and since this component set is created with AS3, it kind of defeats the purpose of using AS3 if you aren’t going to take advantage of the runtime speed. Secondly, what is impl: a DisplayObject, or rather a class that uses a DisplayObject via Composition? Traditional programmers who came to Flash in the past would use Composition all the time because the thought of extending a GUI base class freaked them out. So, instead of extending MovieClip or Sprite, they’d store a reference to the MovieClip or Sprite passed in the constructor. All methods would then act on that reference, as opposed to the class itself via Inheritance. Would doing Composition be better? Although it may have a tad more code, that code would be redundant, so hopefully zlib SWF compression would see that and I’d still hit my small file size goal. Furthermore, I wouldn’t end up with twice as many DisplayObjects. In AS2 that was bad, but in AS3… not so much. If all Buttons were just a simple shell with the “real” Button inside as the impl, is that really that bad? My gut says make the Implementation class via Composition to save on the number of DisplayObjects.
Basically, rather than doing:

// Button.as
private var impl:IButton;

public function Button() {
    var btnImplClass:Class = getDefinitionByName("com.multicast.controls.ButtonImpl") as Class;
    impl = new btnImplClass() as IButton;
    addChild(impl);
}

public function setSize(w:Number, h:Number):void {
    impl.setSize(w, h);
}

// ButtonImpl.as
public function setSize(w:Number, h:Number):void {
    this.width = w;
    this.height = h;
}

I’d instead do:

// Button.as
private var impl:IButton;

public function Button() {
    var btnImplClass:Class = getDefinitionByName("com.multicast.controls.ButtonImpl") as Class;
    impl = new btnImplClass(this) as IButton;
}

public function setSize(w:Number, h:Number):void {
    impl.setSize(w, h);
}

// ButtonImpl.as
private var btn:IButton;

public function ButtonImpl(ref:IButton) {
    btn = ref;
}

public function setSize(w:Number, h:Number):void {
    btn.width = w;
    btn.height = h;
}

Any other ideas (or corrections), lemme know. Wow, great summary and commentary. I figured the PopUp manager implementation class was just in there for pattern/organizational purposes, but I can see how this would be a great method of reducing compile times and swf sizes. Your solution looks promising, but I do hope that Adobe is coming up with a better solution, even though Flex 3 seems a step in the right direction. I don’t know much about Thermo, but based on circumstantial evidence I have a feeling it must fit in the gap between the designer and the developer, and I hope it provides a means of compiling efficiently for Flash projects. We’ll have to wait till next week to see. Anyway thanks for the tips and keep up the great research. I’ll try one of your patterns if I get a chance over the next few weeks and get back. - paul Paul Rangel September 26th, 2007 Thanks for the tip! I really can’t believe Adobe “excluded” this previously documented feature from Flash CS3.
We’ve developed several VERY large products for a VERY large user base over the past few years using a framework and workflow that depends on exclude files for all the benefits you mentioned. The first thing I noticed when I started migrating our code from AS2 to AS3 was that our run-time loaded modules were MUCH larger. Obviously, the exclude files were being ignored, and the chain of imports caused the entire framework and every utility class to be compiled into every swf. I found a couple notes about exclude files in the latest documentation, and one post stating that support for exclude files MIGHT be released in an update to Flash CS3. UNBELIEVABLE! So, I started searching for a solution. I’d come to the conclusion that I could use getDefinition and type everything as Object. Your solution sounds much better. On the other hand, I wonder if anyone at Adobe would like to provide a better solution or promise a patch within the next 6 months. Michael Prescott September 28th, 2007 There’s unfortunately a lot missing in Flash 9 CS3, including the removal of about 7 different necessary components and web service functionality, etc. On top of this, there is nothing really new in Flash 9 CS3 that is earth shattering, except for the export-to-QuickTime functionality (which doesn’t work very well) and ActionScript 3.0. It looks like a very rushed product, in my opinion, made to meet the CS3 release schedule. Where’s all the cool stuff, Adobe? AK October 12th, 2007
I found another solution that involves the usage of the Flex swc compiler ‘compc’ and the Flex compiler ‘mxmlc’. You use compc to generate an swc and mxmlc to generate the accompanying library. Flash CS3 will compile intrinsic against that swc! So the only thing you need to do is load your ‘dll’ before you execute anything else that involves the usage of your library classes. For examples and more information, read more here: Sander January 8th, 2008 Thanks, this post gives insight into modules and the shell, and how to use the swf files efficiently. I found that Flash CS3 has a component for Flex; using this we can directly generate .swc files from Flash CS3. The only problem I am facing currently is loading swc files at runtime. Please, if you have any suggestions about using RSL’s, i.e. swc, at runtime. Thanks Chetan January 18th, 2008
We’ve been using a simpler method of creating empty ‘placeholder’ classes that get replaced. check out: tonypee May 8th, 2008 sorry i was meaning to link to: tonypee May 8th, 2008 I have been searching for a solution to exclude classes using the Flash 9 IDE, but realised _exclude is removed. I then used Flex library projects to create a code library and use it in Flash; I took advantage of referring to other library projects in the library path to do the type checking of classes which I don’t want to include. Things worked fine till I realised that the file size of the Flex export is much larger than the Flash export. I created one single class without any composition or inheritance and compiled it both from Flash and from a Flex library project; the Flash exported file size is 11k whereas the Flex exported swc size is 20k. Does anybody know what extra goes into a Flex swc? I plan to use Flex as a code compiler to take advantage of excluding classes. Does anybody know any other solution? manotosh June 11th, 2008
Thus, everything in Flex (and to a lesser degree, in Flash Authoring) is oriented around slinging around baskets of definitions in SWC format. I shared many of your concerns you express here, which is why I created -link-report and -exclude and -load-externs and the Modules system (particularly the AS3 API, not the flimsy MXML-friendly facade). Basically, until I did that, there was no solution in either product. I got a lot of grief for adding it to Flex because it highlighted its omission in Flash, but I stand by my decision to at least provide the option somewhere. I’m a bit of an odd duck, though as typical Flex/Flash developers go, though. I just build my own stuff on top of SimpleApplication and use Flash to build pretty assets. If they’re pure media, I embed them. If they have some code attached (which I try to avoid) I link them as SWCs. I make heavy use of modules and interfaces, and other than cross-site issues and some annoyances with embedded fonts, life is grand. Roger Gonzalez March 31st, 2009 Oh, and manotosh, just crack open your SWC and look inside to see what dependencies got dragged in. Incidentally, just because the SWC is big, doesn’t mean that an application compiled from that SWC will also be big. Library projects include everything referenced by every definition in the library. Applications that reference libraries exclude anything that isn’t a transitive dependency. Excluding stuff for which there are actual hard references is dangerous. It is much better to try to sever extra dependencies, most cleanly by using interfaces. Roger Gonzalez March 31st, 2009
http://jessewarden.com/2007/09/no-excludexml-in-flash-cs3-for-as3-solution-via-bridge-pattern.html
I recently converted a large Rails 3 application into a gem. Localization is part of the original code base. I modified the *gettext:find* task to look in the gem and in any app that uses the gem to find potential translations.

    namespace :gettext do
      def files_to_translate
        files_in_application + files_in_gem
      end

      def files_in_application
        Dir.glob(glob_pattern)
      end

      def files_in_gem
        Dir.glob(File.join(MyGem.root, glob_pattern))
      end

      def glob_pattern
        "{app,lib,config,locale}/**/*.{rb,erb,haml,slim,rhtml}"
      end
    end

Hence, the task looks in the gem root and, as before, in *Rails.root* for files to translate. My question: is there a way to find *.po* files in both places? I am OK with *.mo* files being stored in Rails.root/locales. (The .mo files can then be checked into the repo for the app without affecting the gem repo.) But I would like a way for the *.po* files of the gem to override any *.po* files in the app. Any ideas?

Martin on 2014-10-16 23:25
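One possible approach for the override question (a sketch, not a gettext built-in as far as I know): gettext's msgcat tool has a --use-first option that keeps the first translation it sees for each msgid, so if the gem's .po files are listed ahead of the app's before merging, the gem wins. The ordering helper below is self-contained; the plain app_root/gem_root arguments stand in for Rails.root and MyGem.root.

```ruby
# Sketch only: app_root/gem_root replace Rails.root/MyGem.root from the task.
# Returns the .po files for a locale with the gem's catalogs first, so that a
# merge where the first file wins (e.g. `msgcat --use-first`) lets the gem's
# translations override the app's.
def po_files(app_root, gem_root, locale)
  pattern = File.join("locale", locale, "*.po")
  Dir.glob(File.join(gem_root, pattern)) + Dir.glob(File.join(app_root, pattern))
end
```

The resulting list can then be shelled out to msgcat (or fed to msgmerge) per locale before compiling the .mo files into Rails.root/locales.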
https://www.ruby-forum.com/topic/6071053
I have the following ftp client:

    import ftplib

    def downloadFTP(file_name):
        ftp = ftplib.FTP()
        ftp.connect(host, port)
        local_file = file_name
        try:
            ftp.retrbinary("RETR " + file_name, open(local_file, 'wb').write)
        except ConnectionRefusedError:
            return

When I download a file from the server that already exists locally, the local copy is overwritten. What I would like to happen instead is that a new copy with a unique name is created. I wrote my own function that does this:

    if os.path.isfile(local_file):
        x = 1
        while os.path.isfile(os.path.splitext(local_file)[0] + " (" + str(x) + ")" + os.path.splitext(local_file)[1]):
            x += 1
        local_file = os.path.splitext(local_file)[0] + " (" + str(x) + ")" + os.path.splitext(local_file)[1]
    return local_file

But I've made assumptions that I don't trust to be true in a production environment, and it seems ridiculous to reinvent the wheel when it's been written so many times before. Is there some cross-platform way to invoke the operating system's file naming procedure? For example in Ubuntu, if I paste the same file multiple times into the same directory, I get the following:

    test.txt
    test (copy).txt
    test (another copy).txt
    test (3rd copy).txt
    test (4th copy).txt
    ... etc

Just wondering if there might be a bit of code out there to help out with this.
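There is no portable OS call for this: the "(copy)" names in the Ubuntu example come from the file manager, not from a kernel or libc API, so a small helper is the usual answer. A sketch along those lines (the " (1)", " (2)" numbering style is a choice, and the check-then-create gap still allows a race between processes; for strict atomicity, open each candidate with os.O_CREAT | os.O_EXCL instead of testing first):

```python
import os

def unique_path(path):
    """Return path unchanged if it is free, else insert ' (1)', ' (2)', ...
    before the extension until an unused name is found."""
    if not os.path.exists(path):
        return path
    root, ext = os.path.splitext(path)
    n = 1
    while os.path.exists("%s (%d)%s" % (root, n, ext)):
        n += 1
    return "%s (%d)%s" % (root, n, ext)
```

In the FTP client this would be used as local_file = unique_path(file_name) before opening the file for writing.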
https://cmsdk.com/python/python-ftp-client--force-unique-filenames-on-download.html
How to Define a Zone Marker

When creating an extension, it is important to get the right dependencies for a ZoneMarker. Zones ensure that a product is loaded only in hosts where it can run correctly. If the zone marker is incorrect, either the components in that namespace are not loaded and the extension is not available, or the extension is loaded when it shouldn't be, which can cause errors because expected dependencies are not available in the host.

ReSharper ships with diagnostics to help create accurate zone markers. Unfortunately, the diagnostics currently only work when all zones are defined in source (i.e. when you're building the ReSharper solution). When building an extension that references the ReSharper Platform via assembly references (i.e. through the SDK), the diagnostics do not work. An updated version of the diagnostics is being worked on, and will be available as a separate SDK tools extension shortly.

(The diagnostics check the zone of each component used in the project, e.g. type usages in component constructors. If the zone isn't explicitly or implicitly required by the current zone marker, the type is highlighted, and a quick fix is available to add the required zone to the zone marker.)

Without diagnostics, it is very hard to work out which zones should be required. If you wish to use a component in your code, you need to discover which zone definitions it requires, and include those in your zone marker. This is done by finding the component's ZoneMarker class and requiring the same zone definitions. This can be done by hand, by finding zone marker classes that live in the same namespace or higher as the namespaces in your using statements, but this is tedious and prone to error.

A temporary solution is to use an empty zone marker. This is usually reserved for infrastructure code that has no real dependencies and should always run. However, as calculating the proper requirements is currently prohibitive, it should be used (TEMPORARILY) for extensions.
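As a sketch of what the two kinds of marker look like in practice: this is illustrative only; the IRequire<TZone> interface and its namespace follow the ReSharper SDK, and IPsiLanguageZone stands in for whatever zone definitions your components actually require, so verify the names against your SDK version.

```csharp
using JetBrains.Application.BuildScript.Application.Zones;

namespace MyExtension
{
    // Zone marker requiring the zones that the components in this
    // namespace depend on (IPsiLanguageZone is a placeholder; substitute
    // the zone definitions your components actually need).
    public class ZoneMarker : IRequire<IPsiLanguageZone>
    {
    }
}

namespace MyExtension.Infrastructure
{
    // The temporary workaround described above: an empty zone marker
    // with no requirements, so components in this namespace always load.
    public class ZoneMarker
    {
    }
}
```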
https://www.jetbrains.com/help/resharper/sdk/Platform/Zones/HowTo.html
Re: 2 style questions
From: Paul Hsieh (qed_at_pobox.com)
Date: 12/22/03
- Previous message: Lew Pitcher: "Re: specific type of Hex conversion"
- In reply to: Sander: "2 style questions"
- Next in thread: Sander: "Re: 2 style questions"
- Reply: Sander: "Re: 2 style questions"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]

Date: Mon, 22 Dec 2003 07:11:55 GMT

In article <bs3j8s$2u5$1@news4.tilbu1.nb.home.nl>, i@bleat.nospam.com says...
> 1.
> I was having a discussion with somebody and I think it's just religious.
> We are developing some software and he was looking through my code.
> I use if (pointer) to test a pointer for NULL. He says it must be if
> (p!=NULL).

They are identical. The issue is not which one you choose, but rather that it matters at all. If either of you has a problem with one method over the other then it's that person's problem. And it's a serious one -- if a person is unable to see a trivial equivalence in their mind, then it's indicative of an improper focus or otherwise very low mental ability. One has religious arguments about religion, not technical equivalents. The only merit of caring is that leaving off the != NULL has fewer characters, and thus occupies less screen space -- but it's horizontal screen space, which rarely matters.

> The arguments are that it's more readable and that if (p) is not portable.

"if (p)" is portable, and since it has an equivalent meaning and can't really be mistaken to mean something else, it can't be less readable.

> I said the first is a style issue [...]

It's a non-issue. They are the same thing. You lose IQ points for every second you waste talking to your codeveloper about it. You don't have more important issues to talk about?

> 2.
> Then I started looking through his code and what I noticed is that he very
> often does things like this:
>
>     char *func(char **out)
>     {
>         *out = malloc(100);
>         return malloc(10);
>     }
>
> This might also be a style issue, but it's a style where it's very easy to
> introduce bugs (and indeed I found a few)
> and even more easy to introduce bugs if you don't know the internals of the
> function. So, what do you think about this style?

For me, the value of encapsulating blocks of code into functions is for the functions to take responsibility for credibly translating input parameters to output parameters. If a function such as this essentially performs two side-effects, it had better be worthwhile to do so. I would be very suspicious of the code above, because both mallocs are not stored in the same structure. Hence this inherently assumes that some higher-level logic is going to tie those two mallocs together, which means that "func" is not taking responsibility for tying them together. There are also 3 possible error conditions (each malloc failing, or both of them failing) all of which have to be determined by higher-level code. For the same semantics, I would rewrite the above as:

    int func (char ** out1, char ** out2)
    {
        if (out1) *out1 = malloc (100);
        if (out2) *out2 = malloc (10);
        return (out1 == NULL || *out1 != NULL)
            && (out2 == NULL || *out2 != NULL);
    }

In this way it's more obvious that there are two outputs (since neither parameter is decorated as "const") and the return value will tell you whether or not there was any failure at all. Also, passing in a NULL pointer will have the effect of deactivating that output rather than simply leading to undefined behaviour. For someone like me, it's hard to justify writing a function like the above, however. The reason for having two mallocs *must* be because out1 and out2 are somehow tied together, possibly in the same object.
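For readers following along, the rewritten function compiles stand-alone once the allocator header is included, and the single return value folds all three failure combinations into one check (code reproduced from the post, with the include added):

```c
#include <stdlib.h>

/* Reproduced from the post, with the include added so it compiles
   stand-alone. A NULL out-pointer deactivates that output; the single
   return value is nonzero only if every requested allocation succeeded. */
int func(char **out1, char **out2)
{
    if (out1) *out1 = malloc(100);
    if (out2) *out2 = malloc(10);
    return (out1 == NULL || *out1 != NULL)
        && (out2 == NULL || *out2 != NULL);
}
```

A caller then tests one flag instead of inspecting each pointer for the three possible malloc-failure combinations.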
Also, if these are really "char *"'s, why don't you also at least initialize them with '\0's in them? Passing around bare mallocs (i.e., where the contents are indeterminate) is generally kind of useless.

--
Paul Hsieh
http://coding.derkeiler.com/Archive/C_CPP/comp.lang.c/2003-12/3172.html
#include <IpOptionsList.hpp>

Inheritance diagram for Ipopt::OptionsList:

Each option is identified by a case-insensitive keyword (tag). Its value is stored internally as a string (always lower case), but for convenience set and get methods are provided to obtain Index and Number type values. For each keyword we also keep track of how often the value of an option has been requested by a get method.

Definition at line 32 of file IpOptionsList.hpp.
Definition at line 142 of file IpOptionsList.hpp.
Definition at line 146 of file IpOptionsList.hpp.

Copy Constructor. Definition at line 150 of file IpOptionsList.hpp. References options_, and reg_options_.

Default destructor. Definition at line 159 of file IpOptionsList.hpp.

Overloaded Equals Operator. Definition at line 163 of file IpOptionsList.hpp. References jnlst_, options_, and reg_options_.

Method for clearing all previously set options. Definition at line 172 of file IpOptionsList.hpp.
Definition at line 179 of file IpOptionsList.hpp. References reg_options_.
Definition at line 183 of file IpOptionsList.hpp.

Get a string with the list of all options (tag, value, counter).

Get a string with the list of all options set by the user (tag, value, used/not used). Here, options with the dont_print flag set to true are not printed.

Read options from the stream is. Returns false if an error was encountered.

Auxiliary method for converting a string to all lower-case letters.

Auxiliary method for finding the value for a tag in the options list. This method first looks for the concatenated string prefix+tag (if prefix is not ""), and if this is not found, it looks for tag. The return value is true iff prefix+tag or tag is found. In that case, the corresponding string value is copied into value.

Tells whether or not we can clobber a particular option. Returns true if the option does not already exist, or if the option exists but is set to allow_clobber.

Read the next token from stream is. Returns false if EOF was reached before a token was encountered.

Map for storing the options. Definition at line 252 of file IpOptionsList.hpp. Referenced by clear(), operator=(), and OptionsList().

List of all the registered options to validate against. Definition at line 255 of file IpOptionsList.hpp. Referenced by operator=(), OptionsList(), and SetRegisteredOptions().

Journalist for writing error messages, etc. Definition at line 258 of file IpOptionsList.hpp. Referenced by operator=(), and SetJournalist().

Auxiliary string set by the lowercase method. Definition at line 284 of file IpOptionsList.hpp.
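The behavior the class documents (case-insensitive tags, lower-case string storage, and a per-option request counter) can be sketched in a few lines. This is NOT Ipopt's implementation; all names below are invented for illustration:

```cpp
#include <algorithm>
#include <cctype>
#include <map>
#include <string>

// Minimal stand-alone sketch of the documented behavior (not Ipopt's code):
// options keyed by case-insensitive tag, values stored lower-case, and a
// counter of how often each value has been requested by a get.
class OptionsSketch {
public:
    void Set(std::string tag, std::string value) {
        Lower(tag);
        Lower(value);
        options_[tag] = Entry{value, 0};
    }
    bool Get(std::string tag, std::string& value) {
        Lower(tag);
        auto it = options_.find(tag);
        if (it == options_.end()) return false;
        ++it->second.counter;   // track how often the option is requested
        value = it->second.value;
        return true;
    }
    int RequestCount(std::string tag) {
        Lower(tag);
        auto it = options_.find(tag);
        return it == options_.end() ? 0 : it->second.counter;
    }
private:
    struct Entry { std::string value; int counter; };
    static void Lower(std::string& s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return std::tolower(c); });
    }
    std::map<std::string, Entry> options_;
};
```

The real class additionally validates tags against registered options and offers typed Index/Number accessors on top of the string storage.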
http://www.coin-or.org/Doxygen/CoinAll/class_ipopt_1_1_options_list.html
We live in great times. There was once a day when the only way to determine if the cat feeder needed more kibble was to actually look at it with your own eyes like some Neanderthal. Thankfully technology and the magical world of IoT has changed all of that.
----
For this project we use a WiFi-enabled ESP8266 and a STMicro VL53L0X ToF Sensor mounted to a breakout board. If you don't find a need to measure feline food consumption, the project still provides Arduino code that can be extremely useful for your other IoT projects:
- Logging data to a Google Drive Spreadsheet (via IFTTT)
- Logging data to AT&T's M2X machine to machine servers (think nice graphs)
- Sending SMSs to your mobile device (via IFTTT)
----
What's happening? The ESP8266 runs in Arduino mode (source code below) in an endless loop. Every hour it polls the VL53L0X ToF Sensor mounted on the lid of the cat food feeder. Since we know how far the food is from the ToF sensor at full and at empty, we are able to scale those values and report/log "percent full" status. Those values are posted to a Google Drive Spreadsheet and to AT&T's M2X machine to machine servers for logging. If the food level is considered CRITICALLY LOW, an SMS message goes out to our mobile device. Just to increase the geek factor, CRITICALLY LOW alerts are also displayed on our Pebble watch.
----
The Google Drive Spreadsheet looks like this:
Here is the bad ass AT&T M2X machine to machine server graph (note the increase after we filled the feeder):
The IFTTT SMS alerts are sent if the rig determines food levels critically low:
So..... How's it done? You will need these:
- WiFi-enabled ESP8266
- STMicro VL53L0X ToF Sensor mounted to a breakout board
- Some wire and a breadboard.
Mount it to the cat feeder and you end up with this:
Now all that is left is to copy/paste the code below into the Arduino IDE. Upload it to the ESP8266 and your cats will never go hungry again!
-----
/*
 * AUG2017
 * STMicro VL53L0X ToF Sensor for
 * Cat Food Level Monitoring
 * WhiskeyTangoHotel.Com
 *
 * Logs % full values to:
 *   Google Drive (as an appending spreadsheet)
 *   AT&T M2X for historical graphing
 * Sends SMS to cell phone if level condition is RED/CRITICAL
 *
 * uC settings for Arduino IDE:
 * NodeMCU 1.0 (ESP-12E Module), 80MHz, 921600, 4M (3M SPIFFS)
 */

// For the STMicro VL53L0X ToF Sensor
// I2C SDA to ESP8266 Pin D2. SCL to ESP8266 Pin D1
#include "Adafruit_VL53L0X.h"   // Thanks again ADAFRUIT!!!
Adafruit_VL53L0X lox = Adafruit_VL53L0X();

// For the Wireless
#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>

// WiFi Connection Information
const char* ssid = "YourNetworkHere";             // PRIVATE: Enter your personal setup information.
const char* password = "YourNetworkPasswordHere"; // PRIVATE: Enter your personal setup information.

ESP8266WebServer server(80);

// IFTTT Information for Webhook widget
String MAKER_SECRET_KEY = "YourIFTTTCodeHere";     // PRIVATE: Enter your personal setup information. Your IFTTT Webhook key here
String TRIGGER_NAME_google_drive = "Cat_Food";     // the Maker IFTTT trigger name for Google Drive spreadsheet logging
String TRIGGER_NAME_M2X = "cat_food_mx";           // the Maker IFTTT trigger name for M2X logging
String TRIGGER_NAME_SMS = "CriticalFoodLevel_SMS"; // the Maker IFTTT trigger name to send SMS if food level is CRITICAL
const char* host = "maker.ifttt.com";
String url_google_drive; // url that gets built for the IFTTT Webhook logging to Google Drive spreadsheet
String url_M2X;          // url that gets built for the IFTTT Webhook logging to AT&T M2X service
String url_SMS;          // url that gets built for the IFTTT Webhook sending SMS if food level is critical

// Define and set up some variables
float Range_inches;          // How far the sensor is from the food at time of reading. Sensor is on roof of feeder.
float Min_level = 5.0;       // Distance in inches before low food alarm level. Food is far from sensor on feeder roof
float Max_level = 0.5;       // Distance in inches from sensor for full feeder level. Food is close to sensor on feeder roof
float Percent_full;          // How full in xx.x% the food is, based on the Min/Max_levels defined above
float Caution_alarm = 35.0;  // at xx.x% food level is considered low. String Status YELLOW
float Critical_alarm = 25.0; // at xx.x% food level is considered critically low. String Status RED
String Status = "***_Starting_with_Caution_at:_" + String(Caution_alarm) + "%_and_CRITICAL_at:_" + String(Critical_alarm) + "%"; // Updated to Out of Range, NORMAL, LOW, CRITICAL, etc. Spaces will error the IFTTT Webhook; use "_"
int Run_number; // how many times the sensor has been read

// Output pins
const int led = 2; // Blue on board LED is on PIN2 for this NodeMCU 1.0 ESP8266. Blink it between reads

// Program control variables
int Seconds_between_posts = 60 * 60; // how often to post the results of the sensor read. NOT EXACT due to post delays, sensor reads, LED flashing, etc.
int logging = 1; // If 1 then log to cloud. Any other value (0) turns it off. ESP8266 "Start/Restart" message is always logged.

void setup(void) {  // This is run once.
  pinMode(led, OUTPUT);  // set up the onboard LED pin as an output
  Serial.begin(115200);  // turn on the serial monitor for debug

  // wait until serial port opens for native USB devices
  while (!Serial) {
    delay(1);
  }

  // Is the ToF sensor connecting via I2C?
  Serial.println("STMicro VL53L0X test");
  if (!lox.begin()) {
    Serial.println(F("Failed to boot VL53L0X!!!"));
    while (1);
  }
  Serial.println(F("VL53L0X Passed... \n\n"));

  // Is the WiFi working?
  WiFi.begin(ssid, password);
  Serial.println("");

  // Wait for connection
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.print("Trying to connect to ");
    Serial.print(ssid);
    Serial.print(" on ");
    Serial.print(WiFi.localIP());
    Serial.println(".");
  }
  Serial.print("Connected to ");
  Serial.println(ssid);
  Serial.println(WiFi.localIP());

  if (MDNS.begin("esp8266")) {
    Serial.println("MDNS responder started");
    Serial.println(" ");
  }

  // Use WiFiClient class to create TCP connections for WiFi logging
  WiFiClient client;
  const int httpPort = 80;
  if (!client.connect(host, httpPort)) {
    Serial.println("connection failed");  // Boo!!!
    return;
  }

  server.begin();
  Serial.println("HTTP server started");  // Woo Hoo!!!

  // Write to Google Sheet via IFTTT Maker channel that the ESP8266 has started/restarted.
  // IFTTT webhook path format: /trigger/{event}/with/key/{key}
  url_google_drive = "/trigger/" + TRIGGER_NAME_google_drive + "/with/key/" + MAKER_SECRET_KEY + "?value2=" + Status;
  Serial.println(" ");

  // This sends the request to the IFTTT server
  client.print(String("POST ") + url_google_drive + " HTTP/1.1\r\n" +
               "Host: " + host + "\r\n" +
               "Connection: close\r\n\r\n");
  delay(500);  // Delay for web traffic; maybe not required.
}

void loop(void) {
  // Loop forever. Read the sensor and post based on the delay values set above.
  // The blue onboard LED will blink between ToF reads.

  // Read the ToF. Distance in mm returned.
  for (int x = 0; x < 10; x++) {  // Quick toggle blue on board LED to show measurement being taken.
    digitalWrite(led, !digitalRead(led));  // toggle state of the on board blue LED. Shows program is running
    delay(100);
  }  // endfor quick toggle Blue LED

  VL53L0X_RangingMeasurementData_t measure;
  Serial.println("-------------------------------------");
  Serial.println("Reading a measurement... ");
  lox.rangingTest(&measure, false);

  // Convert to inches because the USA, for some reason, doesn't want to adopt the metric system...
  Range_inches = measure.RangeMilliMeter / 25.4;
  Run_number = Run_number + 1;

  // Scale and normalize the Min/Max levels to 0% to 100%. Clip the range in case the Max fill limit was exceeded or there was a misread.
  Percent_full = ((Range_inches - Min_level) / (Max_level - Min_level)) * 100.0;
  if (Percent_full > 100) {
    Percent_full = 100.00;
  }
  if (Percent_full < 0) {
    Percent_full = 0.0;
  }
  //Serial.println(String(Range_inches)); // Debug use

  // Is the ToF Sensor reading 'anything' for a distance?
  if (Range_inches > 100) {
    // Something's weird. Ranging error. The ToF sensor is NOT over 100 inches from the food. EVER!!!
    Status = "Run:" + String(Run_number) + "__ERROR:_***_Out_of_Range_***";
  } else {
    // ToF made a successful reading so log the food level
    if (Percent_full >= Caution_alarm) {  // above CAUTION LEVEL, all's good
      Status = "Run:" + String(Run_number) + "___GREEN---GOOD";
    }  // endif GREEN---GOOD

    if (Percent_full < Caution_alarm && Percent_full > Critical_alarm) {  // CAUTION zone, YELLOW---REFILL_SOON
      Status = "Run:" + String(Run_number) + "___~~~YELLOW---REFILL_SOON~~~";
    }  // endif YELLOW---REFILL_SOON

    if (Percent_full <= Critical_alarm) {  // RED---REFILL_ASAP
      Status = "Run:" + String(Run_number) + "___!!!_RED---REFILL_ASAP_!!!";
      if (logging == 1) {  // is logging turned on? Mainly for debug... Typically would be set = 1
        // Set up IFTTT Webhook Channel to send the SMS.
        // Use WiFiClient class to create TCP connections for IFTTT SMS
        WiFiClient client;
        const int httpPort = 80;
        if (!client.connect(host, httpPort)) {
          Serial.println("connection failed");
          return;
        }
        // IFTTT webhook path format: /trigger/{event}/with/key/{key}
        url_SMS = "/trigger/" + TRIGGER_NAME_SMS + "/with/key/" + MAKER_SECRET_KEY + "?value1=" + String(Percent_full);
        Serial.println("Critical Level: Sending SMS with payload:");
        Serial.println(url_SMS);
        Serial.println(" ");
        client.print(String("POST ") + url_SMS + " HTTP/1.1\r\n" +
                     "Host: " + host + "\r\n" +
                     "Connection: close\r\n\r\n");
        //Serial.println("GOOGLE DRIVE URL:");
        //Serial.println(url_google_drive);
        delay(500);  // pause for webservices
      }
    }  // endif RED---REFILL_ASAP
  }

  // Serial print to the monitor for debug
  Serial.println(String(Range_inches) + " inches down / " + String(Percent_full) + "% full");
  Serial.println("Status: " + Status);

  // Create the requests for the IFTTT Google Drive and M2X updates
  url_google_drive = "/trigger/" + TRIGGER_NAME_google_drive + "/with/key/" + MAKER_SECRET_KEY + "?value1=" + String(Percent_full) + "%" + "&value2=" + Status;
  url_M2X = "/trigger/" + TRIGGER_NAME_M2X + "/with/key/" + MAKER_SECRET_KEY + "?value1=" + String(Percent_full);

  // This sends the request to the IFTTT server.
  //Serial.println("Requesting URL..."); // debug
  //Serial.println(url); // debug
  if (logging == 1) {  // is logging turned on? Mainly for debug... Typically would be set = 1
    //Serial.println("Logging is ON.");
    // Set up IFTTT Webhook Channel to update a Google sheet with the activity.
    // Use WiFiClient class to create TCP connections for IFTTT Webhook logging
    WiFiClient client;
    const int httpPort = 80;
    if (!client.connect(host, httpPort)) {
      Serial.println("connection failed");
      return;
    }
    client.print(String("POST ") + url_google_drive + " HTTP/1.1\r\n" +
                 "Host: " + host + "\r\n" +
                 "Connection: close\r\n\r\n");
    Serial.println(" ");
    Serial.println("IFTTT url payload to Google Drive:");
    Serial.println(url_google_drive);
    Serial.println(" ");
    delay(500);  // pause for webservices

    // Set up IFTTT Maker Channel to update an AT&T M2X Server with the activity.
    // Use WiFiClient class to create TCP connections for IFTTT Webhook logging
    //WiFiClient client;
    //const int httpPort = 80;
    if (!client.connect(host, httpPort)) {
      Serial.println("connection failed");
      return;
    }
    client.print(String("POST ") + url_M2X + " HTTP/1.1\r\n" +
                 "Host: " + host + "\r\n" +
                 "Connection: close\r\n\r\n");
    Serial.println("IFTTT url payload for M2X:");
    Serial.println(url_M2X);
  } else {
    Serial.println("Logging is OFF.");
  }  // endif/else logging

  for (int x = 0; x < Seconds_between_posts; x++) {  // Delay for next measurement.
    digitalWrite(led, !digitalRead(led));  // toggle state of the on board blue LED. Shows program is running
    delay(1000);
  }  // endfor delay for next measurement
}
-----
Thanks and check out our other stuff!!!
http://www.whiskeytangohotel.com/2017/08/
Created attachment 175987 [details] Patch combining simgear git commits b9deeb and a97c14 A known bug in the release version of the simgear 2016.3.1 can lead to flightgear crashing with an assertion failure when the pilot changes a comm radio frequency or silences the radio. A forum thread describing the issue is at The bug has been fixed in the simgear project's git repo, but until the next release that leaves FreeBSD's port somewhat broken. To reproduce, one has to have flightgear set up so that it starts up with the radios turned on and receiving ATIS information. If you then turn the volume knob to zero or click the "swap frequency" button on the com1 radio, the code exits with the assertion failure. I have applied the changes made in simgear's git commits b9deeb and a97c14, and found that it fixes the problem with flightgear. These commits may be found at and. The attached patch, if dropped into devel/simgear/files will fix the problem until the next release of simgear and flightgear, which won't have the bug. I should note that because the simgear port produces only static libraries, after the patch is applied and the port rebuilt, one also has to rebuild and reinstall flightgear to get the fix. A commit references this bug: Author: martymac Date: Mon Oct 24 10:13:50 UTC 2016 New revision: 424558 URL: Log: Backport radio bug patch from upstream PR: 213650 Submitted by: russo@bogodyn.org Changes: head/devel/simgear/Makefile head/devel/simgear/files/ head/devel/simgear/files/patch-simgear-sound-soundmgr_openal.cxx head/games/flightgear/Makefile Hi, Thanks a lot for your patch, it has been committed with a few changes as the patch provided didn't match exactly the changes brought by b9deeb and a97c14: the return statement should be outside the #ifdef/#endif block. Regards, Ganael. Egad. 
I saw that the return statement was supposed to be outside the ifdef/endif (it is the entire point of commit a97c14), and then goofed myself when applying the change by hand to produce my patch. I'm sorry it wasn't correct. But thanks for catching that and merging the change. No pb! Thanks to you :)
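The #ifdef/#endif detail matters beyond this one patch: if an early return is accidentally wrapped inside a conditional-compilation block, builds without that symbol defined fall through to code that was never meant to run, which is exactly how an assertion gets tripped. A generic illustration (NOT simgear's actual code; names are invented):

```c
#include <assert.h>

/* Generic illustration: the return must sit outside the #ifdef so that
   builds without DEBUG_TRACE defined still return early. If it were moved
   inside the block, those builds would fall through to the assert below. */
int find_slot(const int *slots, int n, int wanted)
{
    for (int i = 0; i < n; i++) {
        if (slots[i] == wanted) {
#ifdef DEBUG_TRACE
            /* optional trace logging would go here */
#endif
            return i;  /* outside the #ifdef: always compiled */
        }
    }
    assert(0 && "slot not found");  /* reached only if the return vanished */
    return -1;
}
```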
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213650