text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
_ec_atonum2 Name _ec_atonum2 — Convert e string "str" to a signed integral value and store the result in "*nptr" Synopsis #include "util.h" | int **_ec_atonum2** ( | str, | | | | nptr, | | | | width, | | | | minval, | | | | maxval, | | | | endptr ); | | const char * <var class="pdparam">str</var>; void * <var class="pdparam">nptr</var>; int <var class="pdparam">width</var>; long long <var class="pdparam">minval</var>; long long <var class="pdparam">maxval</var>; char ** <var class="pdparam">endptr<. Convert e string "str" to a signed integral value and store the result in "*nptr". Recognises decimal, hex and octal values. - str the source string - nptr the destination pointer the number is stored in - width the size in bytes of the integer type to be used - minval the minimum value allowed for str - maxval the maximum value allowed for str - endptr if non-NULL, holds the address of the first non-digit found 0 if "str" was not a valid integer, 1 if it was
https://support.sparkpost.com/momentum/3/3-api/apis-ec-atonum-2
CC-MAIN-2022-05
refinedweb
153
52.94
dependencies { ... testCompile gradleTestKit() testCompile 'org.spockframework:spock-core:1.0-groovy-2.4' } 29 March 2016 With the release of Gradle 2.10, the Gradle Test Kit was included as an "incubating" feature to "[aid] in testing Gradle plugins and build logic generally." Prior to the creation of the Gradle Test Kit, it had been fairly cumbersome to test custom Gradle plugins. Tests often involved using the ProductBuilder to create a dummy instance of a Project and retrieving a declared Task and executing it manually. While this would test the task logic directly, it did not test the execution of the task as part of a normal Gradle execution. Furthermore, it would not exercise task-based caching, making it hard to verify that any configured inputs/outputs are being honored. This is where the Gradle Test Kit can help. It is focused on functional testing, which means that it emulates what a user will see when attempting to run tasks via the command line or Gradle wrapper. Being an "incubating" feature, however, some of the documentation is lacking, especially when it comes to testing a custom Gradle plugin within the project that contains the plugin definition and source. In this post, we will explore how to set up your custom plugin’s project to use the Gradle Test Kit. The first step is to include the Gradle Test Kit as a test scoped dependency in your project’s build.gradle file: dependencies { ... testCompile gradleTestKit() testCompile 'org.spockframework:spock-core:1.0-groovy-2.4' } This will pull in the Gradle Test Kit libraries for use during the test phase of your project. The next step, which is hard to determine from the documentation, is to make sure that custom plugin and its descriptor are on the classpath path when using the Gradle Test Kit. In its current form, there is no easy way to pass/build this classpath as part of a Spock Framework test at runtime. The trick is to follow what is outlined in section 43.2.1 of the Gradle Test Kit documentation, which outlines how to create a text file containing the classpath to be used by the Gradle Test Kit: task createPluginClasspath { def outputDir = file("${buildDir}/resources/test") inputs.files sourceSets.test.runtimeClasspath outputs.dir outputDir doLast { outputDir.mkdirs() file("${outputDir}/plugin-classpath.txt").text = sourceSets.test.runtimeClasspath.join('\n',) } } In the example above, we use the runtime classpath of the test configuration to generate the classpath list to be passed to the Gradle Test Kit runner. The example in the Gradle Test Kit documentation uses the main configuration, which is fine if you don’t need to provide any additional libraries for testing. In my case, I needed to have some other custom plugins available for the functional test, but did not want those dependencies to be on my main compile or runtime classpath. If you don’t want to have to manually call this task each time you test your project, you can add the following your build.gradle script to tie its execution to the test task: test.dependsOn(['createPluginClasspath']) Now that we have a task to generate the plugin classpath text file, we need to use it as part of our test. In the example below, the contents of the plugin-classpath.txt file read, collected, converted into File objects and stored into a list: class MyPluginFunctionalSpec extends Specification { @Rule TemporaryFolder testProjectDir = new TemporaryFolder() File buildFile File propertiesFile List pluginClasspath def setup() { buildFile = testProjectDir.newFile('build.gradle') propertiesFile = testProjectDir.newFile('gradle.properties') pluginClasspath = getClass().classLoader.findResource('plugin-classpath.txt').readLines().collect { new File(it) } } ... The pluginClasspath list will be passed to the Gradle Test Kit runner via the withPluginClasspath method of the builder, which we will see in a bit. Now that we have our classpath sorted out, the next step is to build test(s) to execute your custom plugin and task(s): def "test that when the custom plugin is applied to a project and the customTask is executed, the customTask completes successfully"() { setup: buildFile << ''' plugins { id 'my-custom-plugin } dependencies { compile 'com.google.guava:guava:19.0' compile 'joda-time:joda-time:2.9.2' compile 'org.slf4j:slf4j-api:1.7.13' runtime 'org.slf4j:log4j-over-slf4j:1.7.13' testCompile 'org.spockframework:spock-core:1.0-groovy-2.4' } repositories { mavenLocal() mavenCentral() } ''' when: GradleRunner runner = GradleRunner.create() .withProjectDir(testProjectDir.getRoot()) .withArguments('customTask', '--stacktrace', '--refresh-dependencies') .withPluginClasspath(pluginClasspath) BuildResult result = runner.build() then: result.task(':customTask').getOutcome() == TaskOutcome.SUCCESS } In the example above, notice that we build a full build script, which includes the application of our custom Gradle plugin, and output it to buildFile created in the setup seen previously. This can be anything that you would do in a project’s build.gradle file. You could even store these files in src/test/resources and load and copy the contents of these files from the classpath and write it out to the file to be provided to the Gradle Test Kit runner. In the when block, we see the Gradle Test Kit in action. Here, we set the project directory to the TemporaryFolder that will contain the build.gradle file, the arguments to be passed to Gradle (e.g. the task(s) and switches), and the plugin classpath we generated in the setup. Without the plugin classpath, you will see errors related to Gradle being unable to locate any plugins that match your custom plugin’s ID. Finally, in the then block, we see that we test to make sure the status of the task execution is the one we expected. You can also inspect the output of the build by inspecting the output field of the BuildResult: result.output.contains('some text') == true Depending on what is on your plugin classpath, you may have tests fail due to issues related to the Xerces library. This is often due to multiple versions of Xerces being present on the classpath when the runner is executed and can be remedied by excluding Xerces from the generated classpath: pluginClasspath = getClass().classLoader.findResource('plugin-classpath.txt').readLines().collect { new File(it) }.findAll { !it.name.contains('xercesImpl') } Notice that we added a step to find all the classpath entries that do not contain the string xercesImpl to ensure that we do not end up with duplicate Xerces implementations on the classpath provided to the test kit runner. The Gradle Test Kit provides an excellent way to functionally test your custom Gradle plugins. Because it uses actual build scripts, it is easy to build up a library of configurations that you want to continually test as changes are made to the custom plugin. Furthermore, the Gradle Test Kit drastically reduces the amount of test code that you need to write, allowing you to more efficiently test your plugin. All of these are great reasons to convert your plugin tests to use the Gradle Test Kit or to write tests for the first time if you don’t currently have test coverage for your code.
http://jdpgrailsdev.github.io/blog/2016/03/29/gradle_test_kit.html
CC-MAIN-2017-39
refinedweb
1,174
53.61
LunaMetrics. Both do a great job of tracking videos that have been loaded with the page. However, both have difficulties with tracking dynamically loaded videos. That means videos which are loaded lazily, or in pop-ups, or when some content link is clicked. In this article, I’m going to show how you can track dynamically loaded videos using LunaMetrics’ solution with slight modifications. LunaMetrics is my favorite group of geeks in North America, and they bribed me with their awesome book just before Christmas, which would already be reason enough to use their code. However, their solution is also all kinds of robust, and it really lends itself to tracking dynamic stuff as well. What you’ll need The modifications will be done to the JavaScript code in the Custom HTML Tag. In other words, you can’t use LunaMetrics’ CDN for this. You will have to create a Custom HTML Tag, and copy-paste the code in the next chapter within. As for the rest of the stuff, just follow LunaMetrics’ guide. All this does is add a method you can trigger when videos have been loaded dynamically. Don’t worry, I’ll explain the method later on as well. The modified Custom HTML Tag So, follow the LunaMetrics guide, avoiding the CDN, and when you come to the chapter titled Google Tag Manager Installation, read the following instead. Create a new Custom HTML Tag, and give it some cool name like Utility – Video Tracking Bomb Overload Christmas Awesome. Sorry for that. Next, add the following code. Yes, that is a LOT of stuff. Bear with me, though. The modifications to LunaMetrics’ solution are few but significant. First of all, instead of using an anonymous function, we’re actually reserving a slot in the global namespace, and exposing the function in a variable named ytTracker. We do this because we want to invoke certain methods in the setup after the initial execution. If we didn’t expose a public method for it, we’d need to run the whole thing again and again, each time a video is dynamically loaded, and that’s just a huge strain on performance. Next thing we’re doing is adding a single line into the normalizeYouTubeIframe() method: The youTubeVideo.setAttribute() command is used to add a data attribute to all iframes which have already been treated by the script. This is because we want to avoid running the initialization methods again and again for videos which have already been configured for tracking. In my testing, if a video was tracked multiple times it led to runtime conflicts. Now that the data attribute is there, we need to check for its existence. In the checkIfYouTubeVideo() method, we’ll add the check like this: This check just looks for the data-gtm-yt attribute we added in the previous step, and if it’s found, the initialization is skipped for this particular video. This way only videos which have not been treated yet will be processed. Finally, we need to expose two methods in the ytTracker interface. These will let us handle dynamically added videos. Lets do that in the very end of the function expression. We return an object with two members: init and digestPotentialVideos. So, when we call ytTracker.init(), the script is basically run again, and all newly added, yet untreated YouTube iframe embeds will be processed and decorated with event trackers. If you want it to be a bit more robust, you can use ytTracker.digestPotentialVideos(iframe) instead. You pass an iframe object as its parameter, and the script will only treat the one video. This is better since it won’t loop through all the iframes on the page, and instead just decorates the one you want. Finally, set the Custom HTML Tag to fire on a Page View / DOM Ready Trigger. That’s the changes right there, ladies and gentlemen. How to use it As said, there’s two new methods in town: Making the solution work will still most likely require developer help. If your developers have added some complex, dynamic content loaders, which add the videos after some awesome jQuery fade-in has finished, you’ll need to cooperate with them to add the ytTracker.init() or ytTracker.digestPotentialVideos() command in the right place. Here’s an example of what a modified content loader might look like: After the video is injected, you call ytTracker.digestPotentialVideos(). You could use the init() method without any parameters, but since you already have the iframe element at hand, it’s better to use the more direct method. Summary Since loading content dynamically is notoriously standard-free, it’s difficult to give a generic solution for managing dynamically loaded content in your analytics or tag management platform. However, LunaMetrics’ original solution has a very nice set of tools to tackle dynamically loaded YouTube videos as well, without having to make drastic modifications to the code. LunaMetrics’ solution is so flexible and useful. Poking some holes into its interface to let some light out just makes it even better, in my own, humble opinion. The optimal way of working with it is to cooperate with your developers. I hope we’ve all matured past the notion of GTM being developer-independent. So, communicate with your developers, ask them to modify the JavaScript controlling the dynamic video injection, and request that they’ll add the simple ytTracker methods to the handlers they create.
https://www.simoahava.com/analytics/track-dynamically-loaded-youtube-videos-in-google-tag-manager/
CC-MAIN-2017-30
refinedweb
911
63.19
As we work on more and more complex problems, we need to start creating custom types to have manageable models for the data we’re working with. Python is an object-oriented programming language, and creating classes is something we will do frequently to solve problems using Python. In this hands-on lab, we’ll define a custom class with some functionality and attributes that will allow us to model an `Employee` in our code. The `using_classes.py` script includes code that will utilize this class and provide us with some feedback to know if we’ve created a class that meets our requirements. To feel comfortable completing this lab, you’ll want to know how to create and use Python classes (watch the “Creating and Using Python Classes” video from the Certified Associate in Python Programming Certification course). Learning Objectives Successfully complete this lab by achieving the following learning objectives: - Create the `employee` Module with an Empty `Employee` Class Before we write any code, let’s see what we need to do to get the using_classes.pyfile to execute to the next step by looking at the error: $ python3.7 using_classes.py Traceback (most recent call last): File "using_classes.py", line 1, in <module> from employee import Employee ModuleNotFoundError: No module named 'employee' This error shows us the first thing we need to do is create the module and the Employeeclass within it. The error doesn’t tell us anything else, though, so we’ll take the smallest step possible to move to the next step. Let’s create an empty class now: ~/employee.py class Employee: pass Running using_classes.pyagain, we’ll see a new error: $ python3.7 using_classes.py Traceback (most recent call last): File "using_classes.py", line 7, in <module> phone_number="555-867-5309", TypeError: Employee() takes no arguments - Implement the `Employee.__init__` Method Our next error is related to not having an __init__method that takes arguments. To know what we need to implement, let’s look at how the Employeeinstances are being created in using_classes.py: ~/using_classes.py from employee import Employee employee_1 = Employee( name="Kevin Bacon", title="Executive Producer", email_address="kbacon@example.com", phone_number="555-867-5309", ) employee_2 = Employee("Bruce Wayne", "bwayne@example.com", "CEO") # Rest of code omitted We can see here the positional order for our arguments is given from the employee_2line, and the name of the attributes are provided by the keyword argument usage when employee_1is instantiated. The phone_numberattribute is optional since it isn’t used to create employee_2. Let’s take this knowledge and implement the __init__method now: ~/employee.py class Employee: def __init__(self, name, email_address, title, phone_number=None): self.name = name self.email_address = email_address self.title = title self.phone_number = phone_number Now we shouldn’t have any issues creating our instances in using_classes.py: $ python3.7 using_classes.py Traceback (most recent call last): File "using_classes.py", line 12, in <module> employee_1.email_signature(include_phone=True) AttributeError: 'Employee' object has no attribute 'email_signature' - Implement the `Employee.email_signature` Method The last few expressions in the using_classes.pyfile demonstrate how the - By default, the phone_numberattribute is not included in the string that is returned; but if include_phoneis true, then it is added to the end of the second line. - If there is no phone_number, then the phone number portion will not be printed, even if include_phoneis true. Let’s implement this method now: ~/employee.py class Employee: def __init__(self, name, email_address, title, phone_number=None): self.name = name self.email_address = email_address self.title = title self.phone_number = phone_number def email_signature(self, include_phone=False): signature = f"{self.name} - {self.title}n{self.email_address}" if include_phone and self.phone_number: signature += f" ({self.phone_number})" return signature By checking if include_phoneand self.phone_numberare both true, we’re able to determine if we should add the phone number to the signature. Let’s run using_classes.pyone more time to ensure everything works. We should see no output if we’ve implemented the method correctly. python3.7 using_classes.py
https://acloudguru.com/hands-on-labs/creating-and-using-python-classes
CC-MAIN-2022-05
refinedweb
659
50.33
The QDLClient class manages a set of QDLLinks for a client object. More... #include <QDLClient> Inherits QObject. Inherited by QDLEditClient. The QDLClient class manages a set of QDLLinks for a client object. The QDLClient class stores a set of QDLLinks for a client object. QDLClient manages the stored QDLLinks, allows links to be requested from QDL data sources and to activate a link on a QDL source. The links managed by the client object can be saved using QDLClient::saveLinks() and can be restored using QDLClient::loadLinks(). Alternatively if the client object belongs to a group, QDLClients::saveLinks() and QDL::loadLinks() should be used for convenience. This class operates independently of any other object. The subclasses QDLBrowserClient and QDLEditClient extends this class to operate on a widget and its text dependently. See also QDLBrowserClient and QDLEditClient. Constructs a QDLClient, parent is passed on to QObject to establish the parent child relationship. The QDLClient is identified by name, which should be unique within a group of QDLClients. name should only contain alpha-numeric characters, underscores and spaces. Destroys a QDL Client. Activates the stored QDLLink identified by linkId on the QDL data source. Adds the link stored in link to the client object. The link Id is returned if the link is added correctly, otherwise 0 is returned. QDL sources create link, which is returned during QDLEditClient::requestLinks(). Therefore in normal usage this method isn't called directly by a client. See also requestLinks(), setLink(), and removeLink(). Sets the broken state of the stored QDLLink identified by linkId to broken, which is used to indicate that the data item at the source is no longer available. See also removeLink(). Removes all links from the client object. See also requestLinks(). Returns the hint used when requesting QDLLinks See also setHint(). Retrieves a copy of the stored QDLLink identified by linkId. If the copy of the QDLLink is modified, QDLClient::setLink() should be used to update the stored QDLLink. If linkId is invalid a null QDLLink will be returned. See also setLink() and linkIds(). Returns the rich-text anchor for the QDLLink identified by linkId. The anchor will not contain the QDLLink icon if noIcon is true. Returns a list of all the stored Link IDs. See also link(). Loads the links in stream into the client object. stream is generated by QDLClient::saveLinks(). See also saveLinks(). Removes the link stored identified by linkId, and updates the anchor text for the link in the parent widget's text. See also addLink() and setLink(). Requests QDL links from a source. The user selects the desired source from a list of available QDL sources, the list is a modal dialog connected to parent. See also clear(). This is an overloaded member function, provided for convenience. Requests QDL links from the QDL source described in qdlService. This method can be used when the QDL source is known. See also clear(). Saves the stored links to stream. See also loadLinks(). Sets the hint used when requesting QDLLinks See also hint(). Updates the link stored identified by linkId with link. See also link(), addLink(), and removeLink(). Returns true if linkId identifies a stored QDLLink; otherwise returns false. See also linkIds(). Verifies the correctness of the links stored by the client object. This method determines if QDLLinks are broken, and ensures that all stored links have are properly configured. See also addLink(), setLink(), and removeLink().
http://radekp.github.io/qtmoko/api/qdlclient.html
CC-MAIN-2022-27
refinedweb
566
78.35
Red Hat Bugzilla – Bug 22064 Gcc (2-95.2) fails to include sys/time.h Last modified: 2007-03-26 23:38:14 EDT This sounds too broken to be true, so this must be a pilot error... After Configuring/Building and Installing Gcc 2-95.2 from Edition 15 of FSF, patching glibc (using glibc-2.2-5.i686.rpm), the following program fails to compile: #include <iostream> #include <time.h> #include <sys/time.h> int main(int argc, char *argv[]) { struct timeval tv; struct timezone tz; gettimeofday(&tv,&tz); std::cout << "TimeVal: " << tv.tv_sec << " " << tv.tv_usec << " TimeZone: " << tz.tz_minuteswest << " " << tz.tz_dsttime << std::endl; } The error is inside sys/time.h inside the prototype for gettimeofday. #if defined __USE_GNU || defined __USE_BSD typedef struct timezone * __timezone_ptr_t; #else typedef void * _. */ extern int gettimeofday(struct timeval *__restrict __tv, __timezone_ptr_t __restrict __tv) __THROW; The second parameter confuses gcc with the following error message /usr/include/sys/time.h:77: two or more data types in declaration of `__tv` (the line nb may be different). However if instead of defining a __timezone_ptr_t you simply define a __timezone__t which goes via the same typedef but does not includes the pointer declaration (*), and explicitly add the pointer specification before the __restrict, the you have no problem. Gcc compiles and the program runs. Note that I compiled with and without -D__USE_GNU, I also attempt -U__USE_GNU I do not beleive that such an commonly used function could fail so simply. Given that dropped from my RedHat 7.0 gcc 2.96 I cannot tell if this would have compiled under it (one would assume so). My guess is that I do not have the matching set of include file for the benefit of 2-95.2. Indeed performing a grep on time.h from / I can see that under /usr/i386-glic21-linux/include/sys/time.h I have headers which would have compiled correctly, however since I am using glibc2.2-5 (see above patch) I should not be affected, besides this is a compilation problem isn't it ? What does rpm -q glibc-devel tell you? The stuff you cite from sys/time.h looks like content of glibc-2.1.92-*, and was changed in early September to work around bugs in gcc 2.95.x (the code is perfectly valid C, just 2.95.x does not cope with it well).
https://bugzilla.redhat.com/show_bug.cgi?id=22064
CC-MAIN-2018-47
refinedweb
398
68.87
Explaining the need for a new module system:. The Java Module System intends to correct these problems by including: o. o A versioning scheme that defines how a module declares its own version as well its versioned dependencies upon other modules. o A repository for storing and retrieving modules on the machine with versioning and namespaces isolation support. o Runtime support in the application launcher and class loaders for the discovery, loading, and integrity checking of modules. o A set of support tools, including packaging tools as well as repository tools to support module installation and removal. The JSR has just been submitted and is due for it's review ballot/voting by June 27th, before continuing further development. Should it pass, the intention is to have it be a part of J2SE 7.
http://www.theserverside.com/news/thread.tss?thread_id=34578
CC-MAIN-2014-10
refinedweb
134
54.73
Microsoft and AOL Fight Over Instant Messaging 381 Fizgig writes "Is it just me, or do they only call for standards when they're not winning? Microsoft just released their MSN instant messaging client, which could talk with AOL Instant Messenger users. AOL then changed the protocol slightly to break Microsoft's. Now Microsoft is calling for standards. And they somehow managed to mention Linux in a story that really has nothing to do with it. " Update: Around 11:30 p.m. EST, Keefesis noticed that MS had released an updated version of their Messenger client that works with the latest version of the AOL product. This MSN page has details. Re:The Gates have two sides... (Score:1) Re:Predictions for how this thread will play out (Score:1) Quick...get those slashdot blinders on. Yahoo, Prodigy? Anyone, Anyone? I guess it's okay to screw over anyone, as long as MS is in the group too. btw...Microsoft already worked around [news.com] AOL's block Slashdot should change it's motto to: If you don't have anything mean to say about Microsoft, don't say anything at all. IRC (Score:1) WOW is this crowd unbelieveable!!! (Score:1) But since this is slashdot, the only cool thing to do is to post against MSFT. Get a brain, there are *other enemies* out there!!! If AOL could have bought Linux, they would have... Re:The Gates have two sides... (Score:1) Standards (Score:1) "Deanna Sanford, MSN's lead product manager, said Microsoft invited AOL to join the Internet messaging standards effort two years ago, but AOL refused." And I remember reading somewhere on MS's page, over a year ago, a comment on how they would like to see a standard. Note that this was before AIM was in full swing, and before AOL owned ICQ. Slashdot makes it sound like MS just started clamoring for a standard, when they have been clamoring for a standard for years. Yet everyone is echoing that MS is pissed that AOL has blocked them, so now they are wanting standards. Try actually READING the article before you start quoting from it next time please. BTW, I found it rather ironic that this article, which does have a pro-ms feel to it, came from a netscape site. Re:Instant Messaging Standard (Score:1) I used to work for MS. (Don't worry, I came to my senses and am now a paid open source developer.) I actually took the job because they told me I'd be working on open standards for instant messaging. It was pretty tough to believe, but I had to go see. Lo and behold, my managment structure actually supported my work in the IETF. Read the archives at Notice that people from microsoft engaged in open discussion of what they wanted out of an instant messaging protocol. Note how many AOL/ Mirabilis folks got involved in the requirements discussions. In this one instance, I believe that the evil empire did the right thing. Really. jesse@fsck.com (a co-author of the impp requirements draft) Don't side with AOL or MS. (Score:1) Microsoft is asking for open standard to instant messaging. Even if we have one, there is no fair competition. As an example, IE tactics on Navigator. Microsoft will use Windows as the leverage to kill off AOL's product. With a standard, AOL will be out in the open in Window land for MS to shoot down. AOL propriety instant messenger is it own protection holding up against Microsoft's product. Microsoft just want the standard, so makes it much easier for them to compete with AOL by using Windows. Like they always had. Linux is the sand box where we all play fairly, own by no one. Yes, Linux is our future. *** Microsoft and AOL shoot it out, but... (Score:1) They say in the article that they want any user to be contactable, just like phones, regardless of manufacturer. Does that also include users that happen to use a unix or linux platform? What about a Mac? I highly doubt it. They'll support standards as long as they are PC-based, and running on the Microsoft OS (whatever flavor happens to be current at the time). Hardly standard, if you ask me... Microsoft's plan (Score:1) Although MS has missed the boat in terms of capturing user base (AIM, ICQ) This market is very volatile... As soon as the following is done, MS WILL win... (1) Make MSNM Simple+Fast, with a CLEAN interface and a set of features along the lines of Block User; Send File; Send through server + option for (winamp-like?) plug-ins... (voice chat; video; white-board...) (2) Bundle it with everything they ship... (3) =) Make it part of Windows2000 - with a default feature of "run-upon-connecting-to-the-net" (4) Make "Choosing Your Online Identity" part of the Windows2000 Installation wizard... That should get them about 30 percent of the market share within 3 months... and the rest will follow "to be able to chat with friends" Watch AOL be the one to cry "foul!" then.... There's already an Open Standard... (Score:2) IRC development is already decentralized as well, any new feature is developed in the server first, then the client. And most servers are open source. My friends, family, etc, all keep telling me to get ICQ or AIM but I always say no. I already have a real-time chat program, and a presence on a few channels. In my opinion, all these new messangers are just trying to reinvent their own proprietary wheel. --Eil. Re:There's already an Open Standard... (Score:1) Personally, I consider it a feature. Each network is different, and I can easily connect to any one I want to at any time. They're run by different people, have different features and setups, and serve different purposes. DALnet, for instance, has "Services": NickServ, ChanServ, and a few others. Some people like these. I, personally, don't. Some networks allow as many bots as you want, some outlaw bots altogether (most of the time each server has a different policy). Different people like different setups. Also, some servers are "specialized" opposed to the Big IRC networks which are for all kinds of things. Slashnet, for instance, attracts a certain kind of crowd. Likely, whatever IRC network some Cracker group sets up with attract a certain kind of crowd. In a GPL'd project I'm involved in, we have our own IRC server that we use to discuss things, and it's very handy to have full control over it and have access to any channel we want. Not to mention the fact that we're free to have as many handy bots around as we want. Re:What's all the fuss? (Score:1) Re:There's already an Open Standard... (Score:1) IRC Instant Messenger? was: IRC? (Score:1) The only program I can see would be how to take care of nicks. The most intuitive way would be to use addresses like MagPulse@efnet. The client would have to connect to all the different IRC servers though, or at least connect and disconnect, a la GameSpy. Microsoft in the right this time (Score:1) I didn't expect to say this anytime soon, but AOL seems to be in the wrong and Microsoft in the right this time. Certainly Microsoft is being hypocritical (see other's posts about Samba, etc) and it would be nice to see them shut up until they mend their ways, but AOL seems to be wrong by changing the protocol in order to break MS's client. Fair is fair; you can't condemn MS's wrongs while supporting AOL's, even if AOL's are against MS. Either condemn them both or support them both, but don't say that AOL is somehow more saintly even though they're using the same tactics as MS does. Re:Open Standards are Good (Score:1) Hold on a sec, What would you do if You had a product that cost multiple thousands a month to keep working and I wrote a product to use your servers but then showed my ads? AOL is proabably paying alot for the bandwidth to be able to have AIM user not using AOL. Now MS creates their product that uses AOL resources. Are we just supposed to say hey thats ok, what would be the next resource of someone elses that they stole? E-mail works because everyone shares their resources. one of the reasons spam is such a problem because its people stealing resources from ISP's. This is close to the same thing and IMHO would be in the exact same boat if MSN ever put ads into thier client. LBS Re:There's already an Open Standard... (Score:2) For what it was designed to do, it was designed pretty nicely. Re:Time for a GPL'd cross platform solution (Score:2) As long as the protocol itself is freely implementable/extendable under other licenses and not hindered by any licensing restrictions of the GPL, it has a chance of working. Re:Aol is dumber then microsoft. (Score:2) Smart move, actually.. Once they start putting ads in all their products, they'll have even more products to sell ad space in instead of just having only one big product to sell ad space in. Re: Anti-GPL Rant (Score:2) You can license your work, but if the protocol is not openly implementable across different licenses then it is proprietary to GPL-based platforms and therefore unusable by larger segments of the computing community. How would you feel if you wrote a nice platform independant messaging client, with plugin modules to easily add support for AIM, or ICQ, or any future protocol, etc, and then MS came, took the code, released it in future versions of windows, and sold banner space, increasing their revenues while denying users the benefits of the OSS that you wrote, and without even paying you for it? I'd feel pretty good, because, in the end, they'd still have to give me credit for it. :) Re:IRC (Score:1) jason Beautiful example (Score:2) Look at instant messaging- sort of like the cell phones of the Internet, annoying but some people absolutely love it- but guess what? There are major security lapses in established products, there's no way to audit the code or have anybody audit the code to track such security holes, and the proprietary vendors are fighting each other to death without caring a tinker's damn about their customers. It's a complete power game and has nothing to do with providing value to customers. Next thing you know, they'll have the whole field tied up in patents and nothing will be compatible with anything else. This is horrible. It's disgusting, and it's hopeless to expect these large corporate companies to act any other way. MS is inciting people to ignore AOL's TOS. AOL is churning their messaging format to break the MS client. They're not going to stop- internet messaging is going to remain a battleground. MS is probably going to behave more like a good guy in this situation- but can you put a price tag on having a choke-hold on internet messaging? They're not in it for their health, and they're damned well not it in for benefitting customers. It's a vitally important leverage point for controlling information flow, and they will capture it (all of it) at any cost- to use as leverage for controlling even more. To hell with all of them. Use the situation to highlight how pathetically little freedom the mainstream computer consumer actually has. If you go with the commercial sector, you have less and less power over your own fate- things are shaping up to really turn the screws. Picture it: "Oh yeah? Well, we'll revise AIM so _only_ the newest clients can use it!" "Oh, you think you're tough? We'll have IE install _our_ client by _automatic_ _update_." "Bastards! We'll put strong encryption on ours and sue you for enticing our customers to violate our TOS!" "Ha, nice try- we'll make our stuff require a _PIII_ with the serialization turned on, and have our clients reference a database at microsoft.com to guard against anyone stealing our users' identities." "Oh yeah? We'll require the PIII too, and patent all variations of our method..." This is a good direction to be moving in? Re:Listen to M$ cry when the tables are turned (Score:3) "Well it's okay when people do it against Microsoft, but not okay when they do it to other companies." Re:Wrong! (Score:3) Instant Messaging, Instant Shmessaging (Score:1) This has been AOL policy for a LONG time. (Score:1) But MS reverse-engineered OSCAR. OSCAR includes a client ID, and unless MS falsifies that to make it look like an official AOL client (which is illegal), it was just a matter of time before AOL killed their client just like every other OSCAR-using AIM clone. Yes, AOL did kill fAIM, etc. by blocking them. The final deathblow was the open (and in many cases better) TOC protocol. The only problem with TOC is that it lacks user searching, otherwise it's always farther along than OSCAR. Re:TiK status (Score:2) Back to the subject at hand: *Microsoft* whining about *AOL* not following standards is surreal. Still, agreeing upon a common standard, regardless of who proposes it, would be a very good thing for instant messaging as a whole. -- Patch already available (Score:2) -- ?Microsoft supports standards? (Score:2) ?Standards are important,? said an unnammed Microsoft spokesman. ?It?s important that Instant Messaging be standardized, just like the world wide web is.? -- Re:?Microsoft supports standards? (Score:2) Obviously, I failed to get my point across. :-( -- TiK still downloadable from AOL (Score:2). gz [aol.com] -- Re:Standards (Score:1) I'm not saying AOL randomly changing their protocols is a good thing... but considering how many times Microsoft has done the same with their own products, they should either put up or shut up. Re:Prediction on How This Will Play Out (Score:1) Guess this makes me a pundit too, huh? It's contagious! Re:What about ICQ? (Score:1) I'm not advocating AOL's action. However, I can certainly see why they did it. Re:Did I miss something? Or did you? (Score:1) Re:Summary (Score:1) Re:Standards (Score:1) Also, like you noted, rather ironic that something so simple took them 2 years to develop. Maybe I'm just being an insufferable prick... but to some degree, turnabout is fair play. Re:Yes, they love standards ;) (Score:1) They only love their precious 'standards' when it suits them... Re:Review of Microsoft Messenger Service (Score:1) Re:Aol is dumber then microsoft. (Score:1) Re:Review of Microsoft Messenger Service (Score:1) I wouldn't mind being wrong about this. They're both wrong (Score:2) AOL is trying to shut out others to protect it's turf. MS is crying about the same tactic it has used without mercy for years. From the comments I've seen, people are unsure of who's in the right here. The soultion: Hate both of them! AOL is wrong (and possibly stupid) to try to cut off communications with others and spread FUD. Microsoft is wrong for complaining about the very tactics they use everywhere else. AOL does have a right to say who can use their servers. If they want to block non-AOL connections, that's their business. The fact that MS managed to modify their client to work around whatever AOL did indicates to me that AOL made changes to bar a particular client software rather than control who can use their server. That's a different matter entirly. In spite of thinking that AOL was wrong to do that, I still don't feel sorry for MS. I just hope they standards war themselves to death soon so the rest of us can put together a real standard. Re:The Gates have two sides... (Score:1) Lynx *IS* a damned fine browser for striping out the bullshit jackasses like yourself insist WWW pages should force on people. It also converts HTML to plain text better than anything else. The DCOM wire protocol spec? (Score:1) If I could have gotten an e-mail address, I would have e-mailed you your answer directly! People here on slashdot *want* to help each other - please let us help you by facilitating the paths of communication. DCOM uses so-called Microsoft RPC to connect COM objects on separate machines. Microsoft RPC is a derivative of DCE RPC - the RPC standard which is part of the Distributed Computing Environment. DCE was created by the Open Software Federation (OSF) in the late 80's to compete with Sun's ONC (Open Network Computing) environment. Sun's simple and ubiquitous RPC implementation is the lynch-pin technology in ONC. DCE is a more complex alternative. It is not open source, but I believe all of the protocols and interfaces are openly documented. Microsoft RPC uses the same wire protocol as DCE RPC - I believe it may still be exactly the same, but I'm not sure. They did change the C API, but I believe Microsoft RPC clients can still call DCE RPC servers, and vice versa. So the answer to your question is rather simple. If you want the wire protocol spec for DCOM, it is the one for Microsoft RPC, which is the same as the one for DCE RPC. The Open Group, the descendant of the OSF, is the current keeper of the DCE specs, and now also the COM and DCOM specs, BTW. Have a look at: and your journey will begin. Good luck! Standards and hypocrisy (Score:3) Incidentally, Microsoft just happens to be especially shameless in doing this. AIM or MSN -- neither are any good (Score:2) I'd rather see open ICQ protocols than a standard based on AIM. UGH! Hopefully AOL doesn't have any plans on turning ICQ into AIM. Re:TiK status (Score:1) # # AOL AOL. I don't think the "disparaging to AOL" part invalidates it's open-sourcedness, although it is a bit amusing. I suppose I can't bad mouth AOL using Tik. AOL must have done something very specific to disable the MSN version because Tik version 0.58 still works with no problem. You can still access the old Tik page from google's cache [google.com] (a very nice feature IMHO), although probably not for long. All the links from it are active but the original page is not. -- Re:TiK still downloadable from AOL (Score:1) -- Sorry, more recent versions are GPL (Score:1) decompile, reengineer or otherwise copy the Service.") Wow, I didn't even know I had the "Service's" executable. -- Re:Why AOL is (legally) in the right (Score:1) these clients are not welcome. It's also worth pointing out that the Tik (A Tcl/TK AIM client that works on any thing Tcl8.0 will run on) is distributed by AOL and comes with a licence that allows use of the AIM service. That's not to say that they couldn't break compatibility. They still could, but it would probably break their own clients as well. They did take the Tik page down but the links from it are still up, see discussion elsewhere in these comments [slashdot.org]. -- Microsoft double standard (Score:1) ytalk!! (Score:1) Much as I hate to defend MS... (Score:2) Granted, Microsoft's motives in releasing this client are doubtless sinister. They want to control this market too, and will somehow manage to Embrace and Extend this protocol to do it (don't ask me how they'll do it without being obvious). But they're right to blow the whistle on AOL for this action. Re:There's already an Open Standard... (Score:1) Re:Yeah, I really want a Microsoft Standard. (Score:1) Why AOL is (legally) in the right (Score:3) But that's not what Microsoft did. Microsoft created a client that interacts with AOL servers to communicate with AIM clients. On the internet, your computer is your castle. If you own a computer on the internet, you are allowed to accept or reject any connection for any reason. It may well be illegal for Microsoft to continue to distribute a client that interacts with AOL servers against AOL's explicit wishes. The AOL AIM client license agreement contains a clause permitting connections to AIM servers run by AOL. The MS client contains no such permission. Microsoft has no legal entitlement to distribute clients which interact with AOL servers. It's worth pointing out that the free Linux AIM and ICQ clients may also one day be illegal to use, if AOL makes it known that connections from these clients are not welcome. As for myself, I use IRC and Unix talk. Why rely on proprietary software using proprietary protocols connecting to proprietary machines under questionable legal foundations, when superior open solutions have long existed? Finally, I cannot help but resist noting that Microsoft is one of the worst offenders in the area of open/closed communications standards. The closed Microsoft Office file formats are the most formidable protection for their profits and monopoly. For Microsoft to complain about AOL's closed communications protocols is the height of hypocrisy. RFC1312 (Score:1) Which is good for multiuser UNIX machines; for dialups it could be extended to use a directory server and/or redirector (akin to a mail server). It's a pity 1312 wasn't more widely adopted. Re:MSNM vs AIM? Oy vey (Score:2) And why do the masses not use ytalk? Its the best! Re:There's a problem (Score:2) Re:There's a problem (Score:2) My favorite quote (Score:5)." Re:Here we go again . . . (Score:1) No they didn't, they own Netscape but don't use it in their products. They had an agreement with MS that they had to distribute IE in return for getting an icon in the 'Online Services' folder on the Windows desktop. Ironically the only reason the 'Online Serivces' folder appeared in Windows was so that they couldn't be accused of trying to make MSN a monoploy but to get your icon on the desktop you had to distribute IE encouraging that to become a monopoly. -- I just tried it (Score:1) Now, if only MSNM can get themselves to work with ICQ SOLUTION! (Score:1) :) haha. IRC? (Score:3) We don't need no steenkin AOL or MS bull. BTW: Did anyone every take a look at MS Comic Chat, MS's bastardized version of IRC? Quite amusing at first, but it's really annoying after a while. Reminds me of Windows actually. Prediction on How This Will Play Out (Score:2) Damnit, now I'm a pundit! Watch it kids, or you'll end up like this too! Am I really off base on this? Isn't this what happened with just about every market MS gets into late? How to make IRC work (Score:1) The reason IRC servers bog down so bad is they are connection-oriented (TCP). Everything sent between IRC servers and clents is a static connection. Change the TCP connection to handling just control information, and use UDP with an ACK protocol on top of it, and you have a lightweight, mostly-connectionless communications standard. Even this change would comply with the IRC RFC, if I remember correctly, since I don't think it specifies the transport for the protocol, just the contents of it. Here we go again . . . (Score:1) Then Gates goes "Psh. Java. Wotta fad", then when developers really start to toy with it heavily, he hasta get a license, make his incompatible version, and grow some lettuce from that. Now the messenger. And as someone had mentioned earlier, ya, he'll prolly add some super spiffy features of his own for his product, yet only after standards have been somewhat defined but of course. But some questions remain... What about NetMeeting? It already has file transferring, chat, whiteboard, voice and video and does it fairly well mind you. Are they going to drop that like a rock in an attempt to market something more familiar to a consumer, or what? And is this "revenge" against AOL? AOL is currently squashing MSN in the consumer ISP battle, and dropped MSIE as its browser for Netscape not so long ago. Is this Microsoft's subtle (or not so subtle) way of fighting back? On the bright side, at least Gates has never really made these killer apps right off the bat, least other companies get to live a little before MS steps in with their own concoction. Gates certainly isn't the master of the obvious. As for Linux being mentioned, it's nearly a buzzword. I'm waiting for Al Gore to start mentioning it randomly in his speeches. ; ^) --Me ------------------------------------------------- If porngraphy is the practice of taking photos of the nude, and a pornographer is the person that does it, does that make the photos pornographs? Re:open standards? what about outlook and hotmail? (Score:1) In Hotmail's case that's qmail [wow, MS uses the same software as I do... qmail on Hotmail, Apache on parts of MSN] which you can easily see when reading the headers: Message-ID: <19990723040816.91017.qmail@hotmail.com> Received: from 209.26.94.126 by with HTTP; Thu, 22 Jul 1999 21:08:16 PDT Time for a GPL'd cross platform solution (Score:2) How much server horsepower would something like this take, though? I guess that's the problem w/ IM - you need a central server(s) to keep track of who's online? On to another thought, how long 'til AOL shuts down all the Linux ICQ clients because they don't display advertisements? Re:Time for a GPL'd cross platform solution (Score:2) Re:IRC Instant Messenger? was: IRC? (Score:1) Re:Microsoft double standard (Score:1) Re:IRC Instant Messenger? was: IRC? (Score:1) Re:Predictions for how this thread will play out (Score:1) On the other hand, I admire AOL (as much as it hurts me to say that I admire AOL for ANYTHING) for flipping MS the bird this time . . . Re:How rude! (Score:1) Unfortunately for Microsoft, despite all their cries and screams about "The Freedom to Innovate," they really aren't all that good at innovating. They've missed the boat, and since they can't buy it, they're trying to hijack it by pushing their way into a closed standard product, and then screaming bloody murder when AOL changes their protocol. The bottom line is, MS will only support open standards when it benefits them. Numerous other folks have already mentioned Java, JavaScript, HTML, MS APIs, file formats, and the like, all of which MS has either developed on their own and kept quite secret, or they've tried to "fix" the standard to benefit themselves. Re:There's a problem (Score:2) Yeah, I really want a Microsoft Standard. (Score:3) Everyone should have an IM address like or the same as their email address. Some sort of IM server should become a standard service like popd or imapd. You punch in someones IM address and it goes to your IM server. Your IM server then finds their IM server by piggy backing off the MX record in DNS, it would be better to have a unique record type. Their IM server says "yes they are online" and patches your IM client to their IM client. When both parties are online, a client to client connection could be established, if the requested party is not online then their IM server could store the message until they got online (ala ICQ). This would be a decentralized comodityized method that could be implimented on any platform. -- If it all runs off AOL servers, keep it closed. (Score:3) AIM should stay closed unless they open the server software and JoeISP can start their own AIM server and sell AIM banner space. If I had a chat server like that, I wouldn't want MS to run their clients through it AND get the money from the banners. Someone has to maintain the machine and pay for the bandwidth. Let MS do that themself. This isn't like E-mail where the bandwidth and servers are spread throughout the planet. Oh,. and the last word. Go figure no one is using their chat client. No one seems to use their play software unless they cram it down the collective public throat. (What happened to ComicChat again?) You better belive this new MSchatboy will be avalable on the desktops of Win 2000. One year later it will be the most popular chat client with the BORG collective. Every Windows magazine will give it 4 stars and rate it a "must get". (No one wants to loose that MS advertising dollar) TiK is the casualty of this, and more articles (Score:5) Also, geeknews.net [geeknews.net] has been keeping up pretty well on this. Here's a news.com article, too: Another interesting thing is that MS released a "fixed" build, which AOL then broke again. Round and round we go. strategy is important. (Score:2) >> You just answered your own >> question. AOL is everywhere, not as a single >> entity. >> >> Good or bad, irrelevent. it's relevant to it's stock holders, I guess. and to people like me who would really like Microsoft to loose this perticular battle. having many diferant products that hardly interact, IMHO is a recipy for disasture. -------------------------------- Aol is dumber then microsoft. (Score:4) A year ago I though exactly the oposite of this. buying Mirabilis (ICQ) was probably the smartest thing AOL did, but I feal they missed the boat compleatly with ICQ. ICQ over the past year became Bloatware full of unnecacery features. Yet the most annoying glitch of the ICQ systems STILL hasn't been dealt with (it's security and privacy, obviously. The is NO help from ICQ when you account gets hijacked by some Script Kiddy (I know, I had my 102541 account hacked, and yes, that really WAS my number, I asked ICQ for support in retreaving my account and they opted to do nothing. It took a local reporter who wrote a story about this to make then delete my own account.. well thats better then nothing, I guess. But even wierder still is the fact the AOL has left AIM and ICQ together side by side, and opted NOT TO put the two together. I really don't understand why one company should have two versions of the same type of aplication, really stupid. It looks like with them buying mp3spy, they will have three programs that have somewhat similar functions, whats that all about?! AOL also didn't integrate ICQ into netscape (they stayed with AIM for that). why?! Yahoo, for example, are smart. every function they get through aquisition is integrated into the main database so one user can control all of them (stocks, geocities, games, ets). AOL decided it would leave everything as it is, and are confusing thier own costemers. If they don't change this, they will loose the battle. Amazon also do this pretty well. But AOL is all over the place. I simply do not understand what they are doing over there. -------------------------------- Open Standards are Good (Score:4) Rant (Score:2) talk has been around a long time. Predictions for how this thread will play out (Score:4) 2) A vocal minority will try and bring some sanity to the discussion by arguing that AOL's tactics hint of an attempt to become a very Microsoft-ish company. Who wins is anyone's guess. Microsoft Standard(s) Practices (Score:3) MSNM looks harmless now, it's just a way to get more people communicating and interacting, right? WRONG. This will be like Internet Explorer - at first it was just a joke, but then version three came out and everybody stopped laughing. MSN Messenger (what a unique name) has started out like a joke, but before long it will come with every version of Windows and offer features far beyond what AIM has. Oops, you can't see what I'm doing because you're still using AIM. Better get the cool new one that lets you do more stuff! Heck, MSNM already lets its own users communicate with AIMers, but not vice-versa. How long before it totally makes AIM unnecessary? AOL is justified to do what they're doing, AIM isn't a standard. If it was a standard, Ms could do like they did with the W3C and pollute the standards to favor their products. AOL has let Yahoo! get away with cloning AIM because the Y! one has the same features as AIM and works well with it. MSNM is just a plot to pull people away from AOL. More power to them for blocking it! How rude! (Score:3) Also, one wonders how much reverse engineering the poaching required on the part of Micorsoft, ever the stalwart defender of Intellectual Property rights. Finally, one is struck by this quote from the Wired coverage [wired.com]: MSN Messenger is the company's first entry into an already popular category of messaging services. What was that bit about The Road Ahead, Bill? Missed the boat again, didja? I don't feel sorry for Microsoft (Score:2) What's all the fuss? (Score:2) Advantages: Can someone explain why ntalk is sufficient or, if there is some little niggling reason, why we couldn't just add to ntalk rather than re-inventing the wheel? --- Put Hemos through English 101! And now, time for something completely different.. (Score:2) #include "and_no_not_the_movie_although_it_was_excellent.h /* Alright slashdotters, I'm going to go against 99% of you and say "I'm with Microsoft on this one." Why? Just because discussion is good. */ First, AOL scares me. Like previous posts have said, they cater to newbies. But they also cater to a worse off group. The uniformed. AOL puts thousands of people on the net a day who have no idea what nettiquite is. People have no idea what a computer is and how it works, and AOL lets them jump onto this mysterious void called "The Internet" and do stuff. Meanwhile, AOL indoctrinates them. People start thinking the Internet is a big happy fun loving place, where you click on the pretty buttons type in stuff, and *BOOM* you get a happy reply. AOL users are forced to use a half-baked software product because that's what AOL tells them is out there. Replace "AOL" with "Microsoft" in the previous sentance, and you see an argument stated many times on Slashdot. Yes, that's right: AOL:Internet::MS:Operating Systems "Ok," you say, "AOL is as bad as MS...so why do you say that you're going with MS??" I'm looking at the track record of the two companies. AOL seems pretty consistant. Deliver junky service and mess up billing. Microsoft, on the other had, has proven to deliver sucky products. But they've heard the cries of the people, and they are (very slowly) responding. Their web browser keeps on improving. Win2k crashes less than NT which crashes less than 95/98. Microsoft products are improving, which means, they are starting to listen to what people are saying. They're starting to take steps in the right direction. Whenever I've heard of people complaining to AOL in the hopes of change, AOL has been less than hospitable. Another item: MS is working with other people to establish standards for internet communication. They at least made the effort to include AOL's populace in their work...and AOL stuck their packet sniffing nose up in the air at them. MS at least made the effort to work with others. So, by a nose, MS edges out AOL in my /* My real opinion? I'm touched that you asked... Both companies suck, and I can't believe they're both squabbling over small stuff. I have some advice for each company tho: MS: Kill marketing. Beat them with the Office97 paperclip if you need to. They're forcing you to realase software when its not done. Elminate them, and NT has a chance of becoming respected among the *nix community. Sun/AOL/Netscape: Sun, I'm sorry you got stuck with those two. Drop them ASAP. */ -------------------------- What about ICQ? (Score:3) Re:The Gates have two sides... (Score:2) Boy, you must have a taste for irony... surely everybody here is adult enough to just admit that IE is a hell of a lot closer to W3C compliance than Netscape is? They both suck, but IE sucks less. Even AOL/Netscape must think that Communicator is crap, otherwise why would they have trashed the Communicator code base for Gecko? or how about one called JavaScript? (a) JavaScript is not a standard. (Since when does Netscape set standards? Their "standards" are the primary reason half of the world's web pages don't work in all of the available browsers.) (b) IE runs JavaScript just fine - at about twice the speed of Communicator. And how about some APIs that work the way they are documented to? Huh? You mean the argument's changed from "the APIs aren't documented"? Gee... the argument's evolving... a moving target! Sorry, I must have eaten something bad a lunch, 'cause I'm sure in an argumentative mood. Didn't mean to take it out on you. Apologies Cheers Alastair Sounds like Calvinball... (Score:2) Microsoft likes to play Calvinball [Ed. note: Calvinball is a game where Calvin makes up all the rules as the game is played and you can never use the same rule twice.] but only when MS gets to be Calvin. I guess they, like Calvin, don't like their own tactics used against them. I can hear the cry already: "But you critized MS for using those tactics. To be consistent you must critize AOL for using them against MS." Sorry, but as an Old School disciplinarian I must wait for the "eye for an eye" standard to be satisfied before I complain about others using MS's tactics against them. IRC's place (Score:2) I use ICQ to communicate with my less-computer-savvy friends, some of whom had trouble downloading and installing ICQ on Win9x systems. I can just imagine the chaos ensuing when they connect to an IRC server for the first time from mIrc. Aol's AIM is, well, obviously for the same crowd as ICQ. It is, after all, newbie-hunger that most characterizes AOL. Tell your standard AIM user to connect to #ohsoeasy on efnet and, well, see what the response is. Sure IRC can do basically everything that ICQ/AIM can, (perhaps auto-login stuff excluded, though I'm sure there are scripts...) but good ole 'mail' (or better yet telnet mail) can be used quite adequately to send mail. That doesn't change the fact that my uncle will still and for the forseeable future use Outlook. That, and to be perfectly honest, I find ICQ easier for simple messaging than the somewhat cumbersome IRC. There is already a solution being worked on. (Score:2) How nice of MS. (Score:2) It's like MS Java. Someone had a great idea and MS decides to steal it. Once they have a market share they can eliminate all competition. Even if they have to take a loss to do it. This is a clear example of MS's attitude that every computer in the world should be running *only* MS software. Why I think MS will win: (Score:2) Though they may use the same bundling tactics that made IE so popular, their use of automatic updating virtually insures their victory. When the update was available for MSNM after the AIM protocol fiasco, a screen popped up in MNSM that allowed me to download the fixed version. This is the same idea as their Windows Update Notification program. With WUN, bundling, and promoting MSNM on MSN, getting MSNM to desktops is a small problem. This makes MSNM extremely more versatile. Hypothetically, if AIM were compatible with MSNM, all umpteen million users would need to manually go to the web site on their own initiative and download the new version. AOL clearly was sideswiped by not including this feature, even though automatic update features are very convenient and popular (RealPlayer, WinAMP, Windows 98, virus scanners, etc). Of course, some people may dislike windows for the constant updates it "needs", but it is my understanding that Linux users download patches and whatnot quite frequently also. Also, I agree that MS's forte might not be innovating, but they are excellent integrators, and that is very important in making a product with a low learning curve. And that's capitalism, Aaron MSNM vs AIM? Oy vey (Score:2) Why doesn't everyone just use ytalk? (ok, ok, so this really doesn't help the discussion. sorry. Review of Microsoft Messenger Service (Score:3)
https://slashdot.org/story/99/07/24/0110215/microsoft-and-aol-fight-over-instant-messaging
CC-MAIN-2016-50
refinedweb
6,973
73.78
I was lazy about this one. The code is borrowed from wiki: def look_and_say(member): while True: yield member breakpoints = ([0] + [i for i in range(1, len(member)) if member[i - 1] != member[i]] + [len(member)]) groups = [member[breakpoints[i - 1]:breakpoints[i]] for i in range(1, len(breakpoints))] member = ''.join(str(len(group)) + group[0] for group in groups) # Print the 10-element sequence beginning with "1" sequence = look_and_say("1") # for i in range(10): # print sequence.next() # The above 2 lines are the only codes I modified as below: for i in range(31): print len(sequence.next()) >>>execfile('pc10.py') 1 2 2 4 6 6 8 10 14 20 26 34 46 62 78 102 134 176 226 302 408 528 678 904 1182 1540 2012 2606 3410 4462 5808 Advertisements
https://geekwardnote.wordpress.com/2013/02/01/python-challenge-level-10-solution/
CC-MAIN-2017-43
refinedweb
137
77.87
5.29: Creating a New Puzzle - Page ID - 14524 def generateNewPuzzle(numSlides): # From a starting configuration, make numSlides number of moves (and # animate these moves). sequence = [] board = getStartingBoard() drawBoard(board, '') pygame.display.update() pygame.time.wait(500) # pause 500 milliseconds for effect lastMove = None for i in range(numSlides): move = getRandomMove(board, lastMove) slideAnimation(board, move, 'Generating new puzzle...', animationSpeed=int(TILESIZE / 3)) makeMove(board, move) sequence.append(move) lastMove = move return (board, sequence) The generateNewPuzzle() function will be called at the start of each new game. It will create a new board data structure by calling getStartingBoard() and then randomly scramble it. The first few lines of generateNewPuzzle() get the board and then draw it to the screen (freezing for half a second to let the player see the fresh board for a moment). The numSlides parameter will show tell the function how many of these random moves to make. The code for doing a random move is the getRandomMove() call on line 11 [305] to get the move itself, then call slideAnimation() to perform the animation on the screen. Because doing the slide animation does not actually update the board data structure, we update the board by calling makeMove() on line 13 [307]. We need to keep track of each of the random moves that was made so that the player can click the "Solve" button later and have the program undo all these random moves. (The "Being Smart By Using Stupid Code" section talks about why and how we do this.) So the move is appended to the list of moves in sequence on line 14 [308]. Then we store the random move in a variable called lastMove which will be passed to getRandomMove() on the next iteration. This prevents the next random move from undoing the random move we just performed. All of this needs to happen numSlides number of times, so we put lines 11 [305] to 15 [309] inside a for loop. When the board is done being scrambled, then we return the board data structure and also the list of the random moves made on it.
https://eng.libretexts.org/Bookshelves/Computer_Science/Programming_Languages/Book%3A_Making_Games_with_Python_and_Pygame_(Sweigart)/05%3A_Slide_Puzzle/5.29%3A_Creating_a_New_Puzzle
CC-MAIN-2021-31
refinedweb
353
59.84
These Lights look incredible thank you so much for source. When the new haxeflixel version comes out you should really try giving your assets some normal maps your game would look even more incredible with lights then. Posts made by PXshadowHF - RE: Super Cute Alien - A cute platformer, inspired on BattleBlock Theater, Super Meat Boy and others! - with screen grabbing trying to figure out if there is a work around. That pretty much sums it up and if any one's curious I'm currently working on Infinite scrolling where post assets are constantly reused so there is only ever 6 posts on the screen and no create calls are called after initial load which is the dream for it to fully work. Have a great day everyone with your dev work, Cheers. -. - RE: Updating FlxText. It would be buttonname.text = "Set text"; and in order to format code on the forum you use 3 " ``` " at the top and bottom of the code example: .``` Code here `` <- extra " ` " and that key to make that charachter is right above tab left side of keyboard - RE: Updating FlxText. @vulpicula yep it does :) - RE: Updating FlxText. In the update function text2.text = "Button presses: [" + statPointsRemaining + "]."; - RE: visible area of scaled stage import openfl.system.Capabilities; And grabbing screen resolution Capabilities.screenResolutionX Capabilities.screenResolutionY That should give you the applications screen width and height :D - RE: ThinQbator - problem solving / invention platform social media. - RE: ThinQbator - problem solving / invention platform social media); } - RE: Several questions about HTML5 development. - RE: Several questions about HTML5 development. - RE: ThinQbator - problem solving / invention platform social media. - Thin :) - RE: How to stop HTML5 from freezing when I lose focus? Possibly try FlxG.autoPause = false;
http://forum.haxeflixel.com/user/pxshadowhf/posts
CC-MAIN-2017-51
refinedweb
283
58.28
Using Ubuntu to cross compile for Raspberry Pi The Raspberry Pi has limited resources compared to a regular desktop PC. Development and compilation on the Raspberry Pi tends to be quite slow. In this tutorial I'm going to show to you how use Ubuntu 12.04 LTS to compile for raspberry pi running raspbian wheezy. The toolchain I use is the one prepared by RTI () for 32 and 64 bit Linux environment. First I installed Ubuntu 12.04 into a virtual machine. After the reboot I installed all the updates and the I rebooted the virtual machine. Then I downloaded and extracted the prepared toolchain into a directory named toolchain: $ mkdir -p /home/myuser/toolchain $ cd /home/myuser/toolchain $ wget $ tar xvzf raspbian-toolchain-gcc-4.7.2-linux32.tar.gz if you are using a 64-bit linux you have to do $ wget $ tar xvzf raspbian-toolchain-gcc-4.7.2-linux64.tar.gz last I exported the path to the toolchain binaries: $ export PATH=/home/myuser/toolchain/raspbian-toolchain-gcc-4.7.2-linux32/bin:$PATH This command shall be issued every time you'll reboot your linux box, so I think it is better to add this line into .bashrc or .profile configuration files. At this point I was ready to compile for raspbian-wheezy A simple helloworld could help.. #include <stdio.h> #include <stdlib.h> int main(int argc, char **argv) { printf("Hello world\n"); } to compile for linux x86 you have to issue the following command: $ cc -o main_x86 main.c to compile for raspbian you have to use the arm-linux-gnueabihf-gcc command, simply substitute the cc command with arm-linux-gnueabihf-gcc: $ arm-linux-gnueabihf-gcc -o main_rasp main.c the file utility could help us to get some info $ file main_x86 main_rasp main_x86: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=0x746ec52e8c892d5fd72418892b620bb9dbb7bf62, not stripped main_rasp: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, not stripped now you have only to move the main_rasp executable into your Raspberry Pi box. Note that we have used dynamic linking. To use static linking you have to add the -static option to the compiling command: $ arm-linux-gnueabihf-gcc -o main_rasp main.c -static Gg1 Great, works flawlessly on ubuntu 12.10, apart from 2 modifications: sudo apt-get install gcc-arm-linux-gnueabihf and changing all instances of myuser to $USER Just need to get me a Raspberry Pi now 🙂 Regards Greg Commenting on here must have satiated the Gods – I have just won one of the fabled Blue Raspberry Pis! #bluepi what a lucky man! Did you take part to a contest? I would like get audacious working on pi .Right now there are some drivers missing. Is there anyway to convert the latest to Raspbian for the raspberry pi? Thank you in advance for any help in that sector. Dart Gar
http://www.xappsoftware.com/wordpress/2013/02/07/using-ubuntu-to-cross-compile-for-raspberry-pi/
CC-MAIN-2018-05
refinedweb
507
65.22
Query Odata Feed Hi, is it possible to query a odata feed using Ext.data.Store ? I keep getting an parse error. Anyone knows how to do it ? thanks - Join Date - Mar 2007 - Location - Gainesville, FL - 38,623 - Vote Rating - 1138 More than likely. What does the feed look like? Got a fast reply. For now its an internal odata feed. But its prety much like this one : if you can show me for example how to load the Customers (json format) to a data store to use it on a list would be awsome ! Thanks in advance! Query For querying you don't need much, the problem arises when you want to write to the store but, if it is just for reading you just need to configure store like this: var store = new Ext.data.JsonStore({ root: 'd', url: '', fields:[...], ... }); This works as expected, i'm using it, i'm also working on an "OdataStore" (and corresponding OdataProxy) to support writing but is not finnished yet. I hope that helps a bit to clarify things. We did some work on querying an OData store back with ExtJS 3.0 and ADO.NET Data Services 1.5. I hope to update this to work with ExtJS 4.0 and WCF Data Services at some point, but haven't had a chance to work on this yet. One of my co-workers posted some information on the approach we took previously, which may be of value. Jeremy Odata I plan on implementing this in the next month or so. If anyone else is working on this, please let me know. I would rather not reinvent the wheel but it will be good practice for me if anything. Any suggestions would be greatly appreciated. I think the trick will be updating I believe. If i'm successful, i'll post the link here. I'm also interestead into it for Ext 4.0.2 - any progress so far? Bump! Im also interested in this topic, anything happend this year yet? I need to build front-end applications for WCF Data Services web services using the OData protocol. Is there any progress on a compatible store for this with CRUD support? Please see this very basic implementation of OData Protocol which WCF Data Services use. This Proxy support CRUD operations and sorting. PHP Code: /* Open Data Protocol Implementation for Ext Js 4 Links: ExtJs 4 Server Proxy: ExtJs 4 Ajax Proxy: OData URI Conventions: License: GNU General Public License v3 Author: Oleg Dolzhansky dolzhansky@gmail.com Example: Ext.define('Customer', { extend: 'Ext.data.Model', fields: [ {name: 'id', type: 'int', defaultValue: 0}, {name: 'name', type: 'string'} ], idProperty: 'id' }); var CustomersStore = Ext.create('Ext.data.Store', { model: 'Customer', proxy: { type: 'odata', url : '/Service.svc/Customers' }, autoLoad: true }); */ Ext.define('Ext.ux.data.proxy.OData', { extend: 'Ext.data.proxy.Ajax', alternateClassName: 'Ext.ux.data.ProxyOData', alias: 'proxy.odata', /* Builds URL in the form Entity(Id), for example */ buildUrl: function (request) { var me = this, operation = request.operation, records = operation.records || [], record = records[0], url = me.getUrl(request), id = record ? record.getId() : operation.id; if (id) { if (url.match(/\/$/)) { url = url.substring(0, url.length - 1); } url = url + '(' + id + ')'; } request.url = url; return me.callParent(arguments); }, /* Returns a string of comma-separated fields from sortes with optional 'desc' directive */ encodeSorters: function (sorters) { var min = [], length = sorters.length, i = 0; for (; i < length; i++) { min[i] = sorters[i].property; if (sorters[i].direction.toLowerCase() == 'desc') { min[i] += ' desc'; } } return min.join(','); } }, function () { Ext.apply(this.prototype, { actionMethods: { create: 'POST', read: 'GET', update: 'PUT', destroy: 'DELETE' }, reader: { type: 'json', root: 'd' }, headers: { 'Accept': 'application/json' }, pageParam: undefined, startParam: '$skip', limitParam: '$top', sortParam: '$orderby', noCache: false }); }); Last edited by Oleg Dolzhansky; 15 Nov 2011 at 6:46 PM. Reason: Corrected namespace Similar Threads Reading a OData feed into Ext-JSBy pkaur in forum Ext 3.x: Help & DiscussionReplies: 1Last Post: 21 Mar 2011, 9:26 AM - Replies: 5Last Post: 21 Mar 2011, 9:26 AM Refresh feed list(feed Panel) in feedviewer examplesBy adelphus in forum Ext 2.x: Help & DiscussionReplies: 1Last Post: 6 Nov 2008, 10:33 PM Updating Title text Of a Feed on Feed panelBy adelphus in forum Ext 2.x: Help & DiscussionReplies: 0Last Post: 17 Jul 2008, 11:40 PM
https://www.sencha.com/forum/showthread.php?127046-Query-Odata-Feed
CC-MAIN-2015-40
refinedweb
714
60.61
Message-ID: <110027550.6633.1634388971980.JavaMail.tomcat@d7e90b4c3fee> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_6632_665960302.1634388971979" ------=_Part_6632_665960302.1634388971979 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: =20=20 =20 =20 =20=20 =20 =20 loc= ation where the Joomla! codebase is installed. In my case, it is = the root of the project I'm working on. It's also worth clicking the second= prompt that asks you if you want to detect the namespace roots; this saves= us having to configure the paths for the project manually. If you don't ge= t prompted by the Joomla! Support prompt, you can enable Joomla! support ma= nually by opening the preference pane, and navigating to Languages &= ; Frameworks then PHP and Joomla! Support.= Once we've enabled Joomla! support, PhpStorm asks if we would like to en= able the Joomla! code styles and if we would like to enable the Joomla! doc= block templates. Joomla! has its own rigorous code style and docblocks= , so enabling these is always a good idea; we'll learn more about these lat= er. Creating a New Joom= la Module/Plugin/Component/Empty Joomla! Project To create a new Joomla project, go to File then = New Project... and select Joomla! Integration from t= he left-hand options. You'll need to tell PhpStorm where to find your Jooml= a! installation path (unless you've already specified it at the previous st= ep),.<= /p> With Joomla! support enabled and PhpStorm knowing where your Joomla! ins= tall is, the path to this Joomla! install should be included in your includ= e paths by default. You can check this by selecting External Libraries = in the left-hand project browser, and check that you can see = joomla library root under PHP. If you don't see this, yo= u'll need to add the path to your Joomla! install to your project manually = in Settings | PHP | Include path. PhpStorm ships with a coding s= tandard for Joomla! code style, and you should be asked if you want to enab= le this when you enable Joomla! support. If you wish to enable the Joomla! = code style manually, browse to Editor, Code Style&nbs= p;then PHP in the preference pane. There, you c= an add the pre-defined Joomla! styles by clicking Set From...= , and selecting Predefined Style then Joomla!= em> = The latest versions of PhpStorm come = with support for the ` _` Holding CMD/CTRL and clicking = the string takes us straight to the `token` static method of the `JHtmlForm= ` class just as we would expect. Of course, the `JHtml` class comes with fu= ll code completion and type hinting like all classes do in PhpStorm.= Similarly to JHtml = ;support, PhpStorm now also supports the static methods of the `JText` class. Text allows you to handle translation= s from definitions (typically in .ini files) and output the translation in = the correct place. When you use the `_` or `sprintf` or 'script' metho= ds of `JText` and pass in a valid string with a key name, PhpStorm will all= ow you to use Brief Info and navigate to the definition (usin= g CMD/CTRL and click) to be taken directly to the .ini file that defined th= at key. PhpStorm comes with a database browser built right in, and the Joom= la! integrations allow us to quickly and easily configure the database tool= from the configuration file that Joomla! creates containing our credential= s. Once we've opened the Database tool (I usually hover over the menu = icon in the bottom left of the screen and select Database), we can add a ne= w data source for our Joomla! database by clicking the += icon and then selecting Import from sources... We then see the Add New Datasource dialog, but with a= ll the fields completed from the settings in the `configuration.php` file. = Simply click Test Connections to check everything is working,= and then click OK. You can see that Joomla! adds a pr= efix to the database tables (which is generated or configured during the in= stallation wizard), and this can make writing queries in the query editor (= part of the database tool) quite painful. PhpStorm provides database prefixes support and changes #__ on fly to = the prefix that is defined in the $dbprefix field = in the configuration.php<= /span> file. You will also have = completion working for your SQL queries.To have it working, make sure that = SQL Dialect for project is equal to D= atabase (Settings-> SQL Dialects). If you Ctrl+Click, you can navigate directly to the table from the query= . = Joomla! Doc-blocks standard support Joomla! code styles have strict standards about docblocks - including which docblo= ck annotations are required, which are optional, and which order they shoul= d be presented in. PhpStorm now ships with an inspection that will tell you= exactly what is wrong with your docblocks, and why they don't meet Joomla!= strict standards. Joomla! provides a ruleset for use with PHP CodeSniffer, and PhpStorm co= mes with support for PHP CodeSniffer out of the box. Refer to the PHP Code Sniffer in P= hpStorm tutorial with the Joomla! CodeSn= iffer standards to add more inspections for code style within PhpStorm.= Joomla!-based projects can be debu= gged and profiled without any Joomla!-specific configuration. Please proceed with standard PhpS= torm debugging or profiling workflow. For additional details on debugger an= d profiler configuration, check this tutorial or relevant videos on the = video tutorials page . Download latest version of PhpS= torm for your platform right now >>
https://confluence.jetbrains.com/exportword?pageId=78840347
CC-MAIN-2021-43
refinedweb
935
66.13
I. You've been kicked (a good thing) - Trackback from DotNetKicks.com WTF does your code have to do with MVC? It looks like Identity Map to me. Joe, It had a model (CacheManager) and controllers that use the CacheManager. It's sorta like MVC... not really... but I like to think of it that way. This has potential in-rush issues in a web farm scenerio. Refactoring your ReviewCache class to use locks and class variables would solve the problem. @Chuck Yea... in my clients site, I did use locks. I wanted to keep it simple for this blog post though. The biggest problem with cache usage is indeed that when using webfarms you can not store user dependant data. It actually should never store such information. However it can be very usefull in lots of other cases to increase performance drastically. A second note: I agree with Joe. This manager has nothing to do with MVC. Yes, you can use it in a MVC application, but let's be fair, you can use about a lot of stuff inside a mvc application... OK... who uses a webfarm other than enterprises? MOST people use a shared or 1 dedicates/virtual server. The Cache object is optimal for those people. If you want a webfarm sort of caching scenario, USE ENTERPRISE LIBRARY! I KNOW THIS HAS NOTHING TO DO WITH MVC!!! BUT THIS FOLLOWS THE PATTERN LOOSLY! I should have chosen my words better... whatever... give it a rest people. Don't get me wrong Zack. Your manager is definatly very valuable for cache abstraction. I actually use a very similair implementation for it. Hey um, give the dude a break already! I don't know this guy from Adam, but it's pretty clear he's trying to help people out with the fruit of his labor, no need to get all Britney Spears catty on him over scenarios he never claimed to address or semantics. I'm sure this code is very useful for some folks and a good starting place for others. Keep it up man. I have to agree with Mike. There are way too many people who read blogs just to blast whatever the author puts on up in an effort to assist. I wish there was way to enforce the old saying; "If you don't have anything good to say, then just do say anything". Just my 2 cents. Is this from Gavin or Zack? Anyways, I like the wrapper, but I'd prefer a more OO approach, with interfaces and implementors, it would be cleaner. @Simone This would be written by me, inspired from Gavin. I could have use interfaces... but we should keep it simple! Pingback from Hide and seek with the ASP.NET cache < Trying This Again Great article. I stumbled on it after impl'ing my own and realizing that someone has probably already done this. I added one nicety to mine that I don't see here. A delegate called "OnGetNewData" for the cachemgr that will fire when the internal cache returns NULL for a lookup. This allows the caller to subscribe to the event and handle the actual datafetch, then return back to the cachemgr for proper storage. Zach. I know where you are coming from. You learn something new, create a class to make integration easy, then you tell the world about it. Only to have people tell you that its old stuff. You have good intentions. Keep it up! I remembered when I discovered the asp.net Cache Object. It was like I discovered a new religion, very exciting times. Your coding skills are a bit advanced to mine. I tend to just stick to simple functions. Here is how I do stuff in my helper functions. <pre><code> // add item to cache public void addToCache(string cachename, string data, int hours){ Cache.Insert(cachename, data, null, DateTime.Now.AddHours(hours), System.Web.Caching.Cache.NoSlidingExpiration); } // get item from cache public string getFromCache(string cachename){ return (string)Cache[cachename]; // in action with another helper function public string getBlogName(string blogid){ string cachename="getBlogName_"+blogid; if(getFromCache(cachename)!=null){return getFromCache(cachename);} string data=countDB3("SELECT title FROM blogs where id='"+blogid+"'"); addToCache(cachename, data, 12); return data; } </code></pre> This was EXTREMELY helpful to me today! @Dave YAY! That's awesome! Hi Zack, Congrats for such an informative article. I too have a blog regarding ASP.NET Cache, it various uses, limitations and how to fix them. You are absolutely correct. Caching is an awesome way of speeding up apps. This is a concise, informative article. It turned up in my search for Cache information and convinced me to implement such a class in my project. I'll be editing it somewhat to give the option of passing a CacheDependency object. Thanks for sharing! Hi Zack! Nice post. I must point out however that ASP.NET Cache has some very intrinsic problems. Its not scalable for large server farms and its In-Process nature isn't reliable either.The right way to solve this scalability problem is through an in-memory distributed cache. Microsoft is finally realizing it. They're working on Velocity but that is still in its infancy and will take some time to stabilize and mature. Thanks for the awesome code sample! I found one minor thing wrong with your C# example - you have to use brackets and not parens for accessing the cache like "HttpContext.Current.Cache[this.CacheKey]" instead of "HttpContext.Current.Cache(this.CacheKey)". Also, I was getting some NullReferenceExceptions in that method, so I modified it like so and it works nicely: public T Grab() { T temp; try { temp = (T)HttpContext.Current.Cache[this.CacheKey]; } catch (NullReferenceException) temp = default(T); return temp; } @Eric Yea... i was converting from VB to C# and didn't catch that. I'm not that much of a noob anymore :) I disagree with you on the NullReferenceException. If the value is null in the cache, why return something? The manager class is just an API that delegates the functionality of the HttpCache. Well, you can feel free to disagree, but I had to do something because the exception was crashing my app :) For me this works, as I can test for the returned value safely now... Just sayin that its not necessarily wrong to return null. Guess its a choice rather a black and white situation.
http://weblogs.asp.net/zowens/archive/2007/10/20/easier-way-to-manage-your-asp-net-cache.aspx
crawl-002
refinedweb
1,073
68.36
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. Where is defined such a modal dialog view? Hello friends, I'm not able to find the definition of this popup windows. Is it in the database? In a .py file? Thanks I want to see the code who constructs the « Use template » section in the bottom at right. My goal is really to change the default value of « Use template » field. In ir.ui.view, I have found sale.order.form : <button name="action_quotation_send" string="Send by Email" type="object" states="sent,progress,manual" groups="base.group_user"/> In the file addons/sale/sale.py, we have the definition of action_quotation_send : def action_quotation_send(self, cr, uid, ids, context=None): ''' This function opens a window to compose an email, with the edi sale template message loaded by default ''' assert len(ids) == 1, 'This option should only be used for a single id at a time.' ir_model_data = self.pool.get('ir.model.data') try: template_id = ir_model_data.get_object_reference(cr, uid, 'sale', 'email_template_edi_sale')[1] except ValueError: template_id = False try: compose_form_id = ir_model_data.get_object_reference(cr, uid, 'mail', 'email_compose_message_wizard_form')[1] except ValueError: compose_form_id = False ctx = dict(), } Yes you are right Anand. Hi Pascal, this will open the window return { 'type': 'ir.actions.act_window', 'view_type': 'form', 'view_mode': 'form', 'res_model': 'mail.compose.message', 'views': [(compose_form_id, 'form')], 'view_id': compose_form_id, 'target': 'new', 'context': ctx, } Thanks. I have already found this in the file addons/sale/sale.py file. I don't understand where is stocked the view. Does this view exist? I was searching for « compose_for_id » or « mail.compose.message », but I don't find it in the database... yes.. there is a database named "mail.compose.message"... and but there is no data.. actually if u read it fully.. u may see compose_form_id is a result of filtering email template..... so a temporary view is created and message is send... I analyze all of this. Come back soon. Thanks Thanks An
https://www.odoo.com/forum/help-1/question/where-is-defined-such-a-modal-dialog-view-71568
CC-MAIN-2017-09
refinedweb
340
61.53
Create a Redis Cache in Your Azure Account You will learn - How to create a Redis Cache - How to create a service principal The service broker uses Redis as a backing service, so you’ll have to set up a Redis Cache. The easiest way to do this is using the Azure Cloud Shell you have initially configured. Verify if there is already a so-called resource provider Microsoft.Cache by issuing the following command: az provider show -n Microsoft.Cache -o table If you get a Not Registered as RegistrationState, you don’t have a resource provider. If this is the case, go on and register it: az provider register --namespace Microsoft.Cache This might take some time. You can check the process by running the provider show command in the Azure Cloud Shell. az provider show -n Microsoft.Cache -o table In Azure, you group resources in so-called resource groups. The first thing to do is to create a resource group. Call the resource group SAPCloudPlatform and locate it in West Europe. az group create -l westeurope -n SAPCloudPlatform It’s time now to create the backing service for the service broker in the form of a Redis Cache. Execute the following command in the Azure Cloud Shell: Replace <unique-cache-name>with a globally unique technical name for the Redis Cache of your choice. az redis create -n <unique-cache-name> -g SAPCloudPlatform -l WestEurope --sku Standard --vm-size C1 Once you have deployed the service broker to your Cloud Foundry space, a so-called service principal is necessary to let your service broker running on SAP Cloud Platform provision resources in Azure. This is usually done by an account owner or administrator and not done by any developer. Create a service principal by executing the following command in the Azure Cloud Shell: az ad sp create-for-rbac IMPORTANT: Take a screenshot of the output in the Azure Cloud Shell or copy the information to a text document as you will need the attribute values in the next tutorial. TROUBLESHOOTING: If your user doesn’t have sufficient privileges to execute this command, please contact your administrator.
https://developers.sap.com/tutorials/cp-osba-azure-redis-cache.html
CC-MAIN-2020-40
refinedweb
361
60.65
ChattieChattie A Python bot framework inspired by Hubot. How do I make my own bot using this?How do I make my own bot using this? First install chattie using pip3 install chattie you then will have access to the chattie cli which generates and runs bots. Next you can create a new bot using chattie new my_bot_name this will create a new directory with the bot name and generate a few files to help you Chattie comes with 3 connectors at this time and I'm constantly trying to add more: - Matrix: pip3 install chattie[matrix] - Telegram: pip3 install chattie[telegram] - Terminal: A REPL you can use for testing your bot! From there you can start by adding tricks and handlers or just running the bot with the default connectors! Core ConceptsCore Concepts Chattie has 4 core concepts around which it's built: - Handlers -- Receive all non-command messages in a room - Tricks -- Things that Chattie bots can do - Commands -- Trigger words for tricks - Connectors -- Let Chattie bots talk to different services Tricks, Commands, and HandlersTricks, Commands, and Handlers Tricks and handlers are just functions which take two arguments, the current instance of the chattie.Bot class and the text of the incoming message as an array split on spaces. For example: # If we recieve the message: "chattie my_new_trick some stuff" def my_new_trick(bot, msg): print(msg) # prints ['my_new_trick', 'some', 'stuff'] print(bot) # prints info about the currently running bot instance return "" # responds to the chat room with whatever string is # returned here Handlers follow the exact same signature however they can optionally return None which will send nothing back to the chat room. This is useful for things like audit logging or catching inside jokes! All tricks will automatically be added to Chattie's help command and if the trick has a docstring it will be included in the help output prettified for the users viewing pleasure. So Chattie can pick up your new tricks you have to assign them to commands, the way you do this is to have a global variable named commands in your module that is a dictionary of trigger words to tricks. For our example above it would look like this: commands = { 'my_new_trick': my_new_trick } What's cool is you can assign multiple commands to the same trick: commands = { 'my_new_trick': my_new_trick, 'new_trick': my_new_trick } Chattie when initialized will automatically pull this variable in and add it to it's known commands. Handlers operate much the same way, but since there is no trigger word for a handler you simply export an array of the handlers you want to register in the global variable handlers: handlers = [ a_new_handler, some_other_handler ] Chattie will take care of the rest! How do I add new tricks and handlers?How do I add new tricks and handlers? There are two ways to write new tricks and handlers, if you think your tricks or handlers will be useful to a wider audience then you can either submit them as a PR to this repo or you can create a Python package using setuptools and entry_points. If you're unsure of what that means you can go here for an explanation of setuptools and entry_points or look in the examples directory where I have a few example packages setup for you to reference. Additionally when you create a bot using chattie new bot_name there will be two Python modules created for you named tricks, and handlers. You can write any tricks or handlers that are local to your bot in these modules and as long as they are exported as described above Chattie will pick those up when that bot is running. Persistent storage for tricks and handlersPersistent storage for tricks and handlers The final concern you may have is how to maintain some state on your bot? For example registered rooms for a given handler, to make this simple Chattie maintains an internal dictionary called inventory. You can always access this directly if you so desire (as you are passed the bots instance as the first argument). But even better is to use the get and set methods provided by Chattie this will auto save the inventory on updates. Chattie also auto loads the inventory from disk on start. Make sure whatever you're storing is JSON serializable by the stdlib json module. An example of inventory usage would be this handler: def arch_linux_counter(bot, msg): text = ' '.join(msg) if 'arch' in msg.lower() and 'linux' in msg.lower(): counter = bot.get('arch_linux_counter') counter += 1 bot.set('arch_linux_counter', counter) # Send a congratulatory message for extolling the virtues of # the Arch Linux Master Race! return "Congratulations on talking about Arch Linux! This is" " the %dth time it's been talked about around this bot!" % counter # Don't send a message return None handlers = [ arch_linux_counter ] ConnectorsConnectors The final concept used in Chattie is that of the Connector. A Connector is a class which implements an interface that allows Chattie to connect to a given backend. Often this is a chat service but can also be any text stream such as stdin/stdout or even Twitter! The Connector class definition varies wildly based on the backend it's connecting to but the interface it must implement is: class Connector: """The base interface that must be implemented by a backend.""" def __init__(self, parser): """Parser is the parse_message function of the Bot class. It should be passed the room_id (whatever form that takes) and the plain text of the incoming message from the service. """ self.parser = parser def listen(self): """Should connect and listen to incoming messages from the backend. When an incoming message is parsed should send to self.parser (the Bot class' parse_message method) """ pass def send_message(self, room_id, msg): """Send the msg to room_id. msg is always a plain string and room_id is whatever is passed to the Bot classes parse_message method as the room_id. """ pass For example connectors you can see the included connectors here The Connector class must be globally exported from the base import path of your module. Just like Tricks and Handlers, Chattie picks up connectors available using entry_points, specifically the chattie.plugins.connectors entry point. This means that you can write your own connectors and distribute them as pypi packages, however if you write a Connector I'd be very happy to include it in the standard distribution so send me a Pull Request! Why the name Chattie?Why the name Chattie? It's based on the movie Chappie, whose main character is a robot who gains emotions and befriends some humans. I thought the pun was worthy. ContributingContributing The basics: - Fork it! 🍴 - Create an issue describing what you're working on. - Create your feature branch: git checkout -b my-new-feature - Commit your changes: git commit -am 'Add some feature' - Push to the branch: git push origin my-new-feature 🔥Submit a pull request :D 🔥 All pull requests should go to the develop branch not master. Thanks! License InfoLicense Info Chattie is distributed under the Apache 2.0 License
http://pythonic.zoomquiet.top/data/20170514125237/index.html
CC-MAIN-2019-22
refinedweb
1,171
59.53
Write a C++ Program to Check Triangle is Equilateral Isosceles or Scalene with an example. If all three sides of a triangle are equal, it is an equilateral triangle. Else if any of the two sides are equal, it is an isosceles triangle. Otherwise, it is a scalene triangle. #include<iostream> using namespace std; int main() { int side1, side2, side3; cout << "\nPlease Enter Three Sides of a Triangle = "; cin >> side1 >> side2 >> side3; if(side1 == side2 && side2 == side3) { cout << "\nThis is an Equilateral Triangle"; } else if(side1 == side2 || side2 == side3 || side1 == side3) { cout << "\nThis is an Isosceles Triangle"; } else cout << "\nThis is a Scalene Triangle"; return 0; } In this C++ output, all three sides are different. So, it is a scalene triangle. Please Enter Three Sides of a Triangle = 30 60 90 This is a Scalene Triangle side1 = 30, side2 = 30, side3 = 120. It means, two sides are equal so isosceles triangle. Please Enter Three Sides of a Triangle = 30 30 120 This is an Isosceles Triangle
https://www.tutorialgateway.org/cpp-program-to-check-triangle-is-equilateral-isosceles-or-scalene/
CC-MAIN-2021-43
refinedweb
166
70.02
Patent application title: Using virtual domain name service (DNS) zones for enterprise content delivery Inventors: Charles J. Neerdaels (Aptos, CA, US) Akamai Technologies, Inc. (Cambridge, MA, US) Assignees: AKAMAI TECHNOLOGIES, INC. IPC8 Class: AG06F1516FI USPC Class: 709203 Class name: Electrical computers and digital processing systems: multicomputer data transferring distributed data processing client/server Publication date: 2013-01-24 Patent application number: 20130024503 Abstract: A domain to be published to an enterprise ECDN is associated with a set of one or more enterprise zones configurable in a hierarchy. When a DNS query arrives for a hostname known to be associated with given content within the control of the ECDN, a DNS server responds by handing back an IP address, by executing a zone referral to a next (lower) level name server in a zone hierarchy, or by CNAMing to another hostname, thereby restarting the lookup procedure. At any level in the zone hierarchy, there is an associated zone server that executes logic that applies the requested hostname against a map. A name query to ECDN-managed content may be serviced in coordination with various sources of distributed network intelligence. Claims: 1. A method of content delivery involving a first entity and a second entity, comprising: deploying a first relay behind an enterprise firewall associated with the first entity, wherein the first entity has a private namespace that is not resolvable through public Internet DNS, the private namespace having one or more enterprise domains associated therewith; delegating to the second entity responsibility for managing a virtual zone associated with the private namespace of the first entity; and in response to receipt of a request associated with the virtual zone, retrieving a private object published by the first entity. 2. The method as described in claim 1 wherein the second entity is a service provider and the first entity is a service provider customer. 3. The method as described in claim 1 wherein the virtual zone is resolvable without reference to public Internet DNS. 4. The method as described in claim 1 further including: building a list of the enterprise domains that are candidates for caching, wherein an enterprise domain that is a candidate for caching has associated therewith a set of one or more IP addresses associated with nearby content servers managed as part of an Internet content delivery network (ICDN) associated with the second entity. 5. The method as described in claim 4 wherein the list of enterprise domains is built based on recorded usage patterns of address record lookups. 6. The method as described in claim 5 further including the step of modifying the list as the recorded usage patterns change. Description: [0001] This application is based on and claims priority from Ser. No. 12/351,870, filed Jan. 12, 2009, now U.S. Pat. No. 8,275,867, which application was based on and claimed priority to Ser. No. 10/466,797, filed Jun. 24, 2004, now U.S. Pat. No. 7,478,148, which application was based on and claimed priority to Ser. No. 60/262,171, filed Jan. 16, 2001. BACKGROUND OF THE INVENTION [0002] 1. Technical Field [0003] This disclosure relates generally to content delivery and management within a private enterprise network. [0004] 2. Description of the Related Art [0005]. [0006] [0007]8] process application from front ends efficiently, in of itself, will be a critical IT problem that current technologies do not addresss.. BRIEF SUMMARY OF THE INVENTION [0009] It is a general object to define and implement one or more virtual zones within an enterprise namespace to facilitate enterprise content delivery behind a corporate firewall. [0010] It is another general object to provide an enterprise content delivery network that coexists with existing or "legacy" Domain Name Service (DNS) infrastructure to facilitate mapping of requests for enterprise resources to surrogate servers, e.g., common caching appliances, application servers, distributed storage and database management systems, and streaming media servers. [0011] It is yet another general object to define and implement one or more so-called "virtual" zones within an enterprise namespace to facilitate content delivery behind a corporate firewall over an enterprise content delivery network (ECDN). [0012] Another object is to enable an ECDN DNS to coexist with an existing enterprise DNS and to enable content delivery with only minimal configuration changes to the existing infrastructure using virtual zones. A given host or sub-domain to be published to the ECDN is aliased into a virtual zone namespace preferably using a DNS Canonical Name (CNAME). [0013] A still further object is to implement an enterprise content delivery network wherein any arbitrary hierarchical namespace can be implemented and wherein each layer of the namespace inherits the namespace above. [0014] According to this disclosure, a domain to be published to an enterprise ECDN is associated (either by static configuration or dynamically) with a set of one or more enterprise zones configurable in a hierarchy. When a DNS query arrives for a hostname known to be associated with given content within the control of the ECDN, a DNS server preferably responds in one of three (3) ways: (a) handing back an IP address, e.g., for an ECDN intelligent node that knows how to obtain the requested content from a surrogate or origin server; (b) executing a zone referral to a next (lower) level name server in a zone hierarchy, or (c) CNAMing to another hostname, thereby essentially restarting the lookup procedure. In the latter case, this new CNAME causes the resolution process to start back at the root and resolve a new path, probably along a different path in the hierarchy. At any particular level in the zone hierarchy, preferably there is an associated zone server. That invention provides for a distributed, dynamic globally load balanced name service. [0015] According to a more specific aspect, to the configured ECDN zone primary server. This primary ECDN zone server associates, and relays via CNAME, a name query with an ECDN intelligent node (or resolves directly to a given content origin server). This association may be based on given conditions within the network, server load conditions, or some combination of various known metrics. [0016] With ECDN virtual zones, maps that associate a given request to a server may be local or global, static or dynamic. While the maps typically are generated within the ECDN infrastructure, they could also be affected be external agents providing hints or modifications. If multiple zones are used in a hierarchy, a first level ECDN zone server directs the name query to a second level ECDN zone server, and so on, until the appropriate response is generated. At any one level the particular zone server need not be able to resolve the entire name; rather, the particular zone server need only be able to resolve the portion required for directing the next level of resolution. For example: the zone server for the namespace containing b in the hostname a.b.c.d.e may have no idea where the final resolution will end up, but it should know which set of servers may be authoritative for the namespace containing c. Because DNS readily allows one to "carve out" zones in the namespace of a partcular domain (e.g., ecdn.company.com can be managed by a different authoritative DNS server than the rest of company.com), the creation of the first level ECDN zone on the existing DNS infrastructure allows the two systems to coexist and undergo modification in parallel without interference with one another. This enables the present invention to be readily integrated into an existing or legacy DNS solution with minimal reconfiguration. [0017] Thus, according to this disclosure, an existing DNS infrastructure is augmented with one or more ECDN zone servers, and requests for content to be delivered over the ECDN are resolved through the one or more zone servers. In one intelligent node. The client passes a host header to the ECDN node to identify the actual host. 19] FIG. 1 is a block diagram of an enterprise in which the inventive enterprise content delivery network (ECDN) may be implemented; [0020] FIG. 2 is another enterprise environment in which the present invention may be implemented; [0021] FIG. 3 illustrates a known technique for handling a request for content in an enterprise using a legacy DNS infrastructure; [0022] FIG. 4 illustrates a technique for enterprise content delivery using virtual DNS zones according to the present invention; [0023] FIG. 5 illustrates a representative DNS namespace in which an enterprise virtual zone is created according to the present invention; [0024] FIG. 6 illustrates a zone hierarchy in which first and second level ECDN zone servers are used to resolve a name query to an enterprise domain that has published to the ECDN; and [0025] FIG. 7 illustrates a technique for dynamic zone creation through an agent-hinted CNAME according to the present invention; and [0026] FIG. 8 illustrates a technique for using virtual zones to enable an enterprise partner to access an enterprise ECDN to retrieve intranet content. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT [0027] The disclosed subject matter is implemented within an "enterprise" or "enterprise environment." Referring to FIG. 1, a representative enterprise environment in which the present invention is implemented comprises at least one primary central office 100 (e.g., with central IT support), which is connected to one or more regional datacenters 102, with each datacenter connected to one or more remote branch offices 104 via private line, or via the public Internet (most likely with a virtual private network or "VPN"). The branch offices 104 may be fully (or partially) meshed amongst themselves. FIG. 2 illustrates another enterprise computing environment that includes a central office 200 and a pair of remote offices 202. As shown in FIG. 2, remote office 202a is connected to the central office 200 over a private line, which refers to a line not generally routable over the public Internet (e.g., frame relay, satellite link, microwave link, or the like), and remote office 202b is connected to the central office 200 over a virtual private network (VPN) typically over the public Internet, or in other known ways. A given remote office may be an office of an enterprise business partner if an appropriate business relationship between the enterprise and the partner exists. [0028] Enterprise networks are generally based on one one of two network types--point-to-point private/leased line, or VPN over public lines (often fully meshed between offices). From a topological view, actual enterprise networks may not operate in a hierarchical sense for connectivity. From a logical management view, however, the servers remain in a relatively weak hierarchy (as illustrated in either FIG. 1 or 2) with more valuable and management intensive systems usually exisiting at more centralized data centers, and very few of these types of systems being deployed in remote branched offices. This architecture is usually a function of where information and management reside, which drives the cost of management for the overall IT network and IT services. Thus, for example, in the more complex enterprise such as shown in FIG. 1, regional datacenters tend to have more critical information than branch offices, and the central office tends to have the most critical information. It is assumed for purposes of illustration that the enterprise hosts given internal content that it desires to have stored, cached and delivered to end users in the enterprise. Such internal enterprise content, or ECDN content, generally is non publicly-available content (in whatever format) that an enterprise desires to make available to permitted users within the enterprise or within a third party partner of the enterprise. An end user typically operates a computer having an Internet browser. The authoritative content and application servers 106 (in FIGS. 1) and 206 (in FIG. 2) being published into the CDN (each identified here as an "ECDN server") can exist in any location, although from a practical perspective they may often be somewhat centrally-located. Such servers are sometimes referred to an origin servers. The enterprise is also assumed to be IP based and to have a Domain Name Service (DNS) 108 (in FIGS. 1) and 208 (in FIG. 2). From a security standpoint, the enterprise network manager roughly divides the world of the network into trusted and un-trusted, which usually corresponds to internal and external entities. Firewalls 110 (in FIGS. 1) and 210 (in FIG. 2) are typically configured to allow most outbound traffic, with severe restrictions on inbound traffic. This allows for access to requested services without providing third parties a foothold for intrusion. More sophisticated systems usually create an security entity called a DMZ, which can be thought of as a set of two firewalls, with certain assets like email, DNS, web servers, etc. sitting between them. Each firewall has a different set of filtering rules, with the innermost generally allowing valid traffic from a host within the DMZ to enter the enterprise. Generally, no externally initiated traffic is allowed to pass both sets of filters. [0029] For purposes of illustration, an enterprise content delivery network (ECDN) is provisioned into the enterprise network topology of FIG. 2. The ECDN may be implemented in a standalone manner, or it may be part of a larger Internet CDN, in which case the Internet CDN service provider may manage the ECDN on behalf of the enterprise. Various techniques for extending an ICDN into an enterprise are described in U.S. Pat. No. 7,096,266, which is commonly-owned and which is incorporated herein by reference. Referring back to FIG. 2, ECDN functionality is provided by locating one or more intelligent nodes (each referred to as an IN) that execute appropriate management, provisioning, monitoring and routing applications to facilitate the distribution, delivery and management of the private enterprise content delivery network. Thus, for example, an IN 212a may be located in the central office 200 and each of the remote offices may include an IN box 212b as indicated. An IN box 212 is typically a computer, namely, a server, having at least one Pentium-class processor, an operating system (e.g., Linux, Windows NT, Windows 2000, or the like), and some amount of disk storage and system memory. Using the IN boxes, the enterprise (or the CDNSP on its behalf) has the ability to map and load balance users inside the enterprise (perhaps as part of the global ICDN), to fetch content from inside the firewall, to collect log information, to deploy software, to distribute live streams, and to provide other ICDN functionality. [0030] Thus, an illustrative enterprise CDN ("ECDN") is comprised of a variety of origin and surrogate servers and a distributed set of intelligence nodes (IN). An origin server can be any server that is intended to respond to a user request for information and includes, for example, common caching appliances, application servers, distributed storage and database management systems, and streaming media servers. The intelligence nodes tie the entire set of distributed nodes and services into a single, virtual service cloud. The ECDN intelligence nodes may have different configurations. Thus, for example, a first type (e.g., IN 212a in central office 200) may be sized (in terms of processing power and memory) for the largest datacenters, and such machines would be aggregration points for monitoring, logging, management and request routing. A second type of intelligence node (e.g., IN 212b) may be sized for branch office deployment, and such servers would be sized to optimize local behavior. The nodes may be placed in a hierarchy with an IN of the first type acting as a parent. An intelligence node of the second type would typically include a caching engine. [0031] Architecturally, an ECDN such as illustrated above is similar to an Internet content delivery network (ICDN) and comprises: HTTP, streaming and DNS server nodes deployed across a customer network. Generalizing, the ECDN provides the enterprise with the following: a namespace and an associated request routing mechanism (to control how to navigate the CDN), storage, caching and streaming (to control how and where are objects stored and distributed), optionally an application environment (to facilitate application extensibility), management and provisioning (to control how the CDN is deployed and managed), and reporting/auditing (to provide visibility into the behavior). An ECDN such as described above provides an application, streaming and content delivery platform so that enterprise content may be delivered and managed in a decentralized fashion while providing a tightly-integrated solution with a customer's storage, network, application and management solutions. [0032] Enterprises today either outsource their Domain Name Service (DNS) or, as described above, run a DNS infrastructure (e.g., one or more DNS servers). A DNS namespace has certain characteristics, as are well-known. Each unit of data in a DNS distributed database is indexed by a hostname. These names are simply leaves in a tree representation, which is often called a domain namespace. Each node in the tree is labeled with a simple name. The fully qualified hostname of any node in the tree is the sequence of lables on the path from that node to the root. A zone is a subtree of the domain name space. Names at the leaves of the tree generally represent individual hosts, whose names resolve to one or more IP addresses. It has become common in instances of virtual hosting within ISPs to have multiple customers share a single machine, therefore having multiple hostnames reolve to a single machine or server farm. CDN technology to date has additionally made it common for a single hostname to potentially resolve to any one of thousands of machines, though at any given time the set of possible hostnames is configured and fixed. The term zone usually relates to a set of hosts within a single authoritative server's namespace, whereas a single zone could include multiple sub-domains. Thus, e.g., if foo.com and eng.foo.com are both handled by the same nameserver, they are both in the same zone; but, if foo.com's nameserver refers a lookup to a different authority for resolution, they would be considered separate zones. [0033] An organization administrating a domain can divide it into subdomains. Each of those subdomains can be delegated to other organizations, in which case the organization receiving the delegation becomes responsible for maintaining all the data in that subdomain. The machines or programs executing on those machines that store information about the domain namespace are often called name servers. Name servers generally have complete information about some part of the doman namespace, namely, a zone. In such case, the name server is then said to have authority for that zone. A name server can also maintain cached information--or non-authoritative information, which is obtained by communicating with an authoritative name server. A zone contains the domain names and data that a domain contains, except for domain names and data that are delegated elsewhere. If a subdomain of the domain has not been delegated, the zone also contains the domain names and data in the subdomain. [0034] For purposes of the present invention, it is assumed that the enterprise operates (or has operated) DNS servers, that those servers manage data held in zones, and that each zone is a complete database for a particular "pruned" subtree of the domain space. As noted above, the zone data is authoritative, in that it contains an accurate (typically, the most accurate) representation available for a particular set of hostnames. At least one host in the enterprise stores enterprise content that the enterprise desires to make available to permitted end users (enterprise or partner employees). The enterprises use the existing DNS infrastructure to resolve client requests for content at that host. FIG. 3 illustrates the known prior art. In this example, it is assumed that the user desires to retrieve the enterprise internal homepage, which is a page located, in this example, at a URL such as hr.company.com/homepage.html. To this end, the user, or client 300, interacts with DNS via a resolver 302, typically residing on the same machine. As is well-known, a resolver is a client process that accesses a name server. Programs running on a host that need information from a domain name space use the resolver. In DNS BIND, for example, the resolver is a set of library routines that query a name server, interpret responses, and return the information to the program/process that requested it. Thus, with reference to FIG. 3. the resolver 302 interacts with the enterprise name server 304 to resolve client queries, which are then returned to the client. In the example shown in FIG. 3, the client resolver 302 makes a DNS request (in this example "who is hr.company.com?") to the corporate DNS name server 304, which returns a reply with an actual IP address y.y.y.y of the host on which the document is stored. The client resolver 302 then uses a DNS A record to build a connection to the actual web server 306, and issues an appropriate HTTP request to the web server 306 (e.g., via a GET hr.company.com/homepage.html). The web server 306 responds with the content reply to complete the request. [0035] According to this disclosure, the existing DNS infrastructure is augmented with one or more ECDN zone servers, and requests for content to be delivered over the ECDN are resolved through the one or more zone servers. In the simpliest IN. The client passes a host header to the ECDN IN to identify the actual host. [0036] FIG. 4 illustrates an embodiment of this disclosure. DNS requests typically occur over UDP, while HTTP requests typically occur over TCP. The enterprise is identified as "akamai" for illustrative purposes only. In this example, the client resolver 402 makes a DNS request for "hr.akamai.com" (step (1)) to the configured name server 404. This name server may be part of the legacy DNS infrastructure, as described above. This same name server 404 (running a suitable DNS BIND version) looks up hr.akamai.com and sees that the hostname resides within the zone configured to be within the ECDN (potentially after first resolving a CNAME from hr.akamai.com to itself for hr.ecdn.akamai.com). At step (3), the query is then referred to an ECDN name server 406, with the ECDN name server being configured to be authoritative for the ecdn.akamai.com zone. Some of the steps are compressed in the figure for clarity. At step (4), the ECDN zone server 406 could do one of three things: reply with an IP address x.x.x.x, pass the query to a different zone for hr.ecdn.akamai.com, or CNAME the request to a different hostname all together (restarting this process). In the drawing, the ECDN zone server has returned the IP address. Thus, at step (5), the client makes an HTTP request to the ECDN IN 408 at IP address x.x.x.x with a host header: identifying the host hr.akamai.com. At steps (6), (7) and (8), the ECDN content server handles the HTTP portion of the request in the standard fashion for a surrogate. The content is served from the origin server 406. [0037] The technique shown in FIG. 4 provides significant advantages. According to this disclosure, when a DNS query arrives for a hostname known to be associated with given content within the control of (i.e., managed by) the ECDN, the DNS server that receives the query preferably responds in one of three (3) ways: (a) handing back an IP address, e.g., for an ECDN intelligent node that knows how to obtain the requested content from a surrogate or origin server (as illustrated in FIG. 4); (b) executing a zone referral to a next (lower) level in a zone hierarchy (as defined in standard DNS), or (c) CNAMing to another hostname, thereby essentially restarting the lookup procedure. This new CNAME cause the resolution process to start back at the root and resolve a new path, probably along a different path in the hierarchy. At any particular level in the zone hierarchy, there is preferably an associated zone server. Zone servers may operate on the same machines but typically have different IP addresses. Each zone disclosed technique provides for a distributed, dynamic globally load balanced name service for an enterprise CDN. [0038] From a logical view, hierarchical DNS zones for resolving a particular query within a content delivery network are represented according to the present invention as illustrated in FIG. 5. In this example, it is assumed that an Internet CDN service provider operates over a namespace that is referred to as "akamai." This namespace includes a subdomain of *.ecdn.akamai.com, which, as noted above, is the virtual zone. In this example, the CDNSP manages the zone on the enterprise's behalf. In a normal name query, a request for foo.akamai.com returns some IP address, which is normally fixed. With web hosting, as is well-known, virtual hostnames are often used to allow a single address to handle multiple hostnames. For load balancing, a round robin technique is often used, allowing for one name to be spread across multiple addresses. For virtual zones, preferably there is a many-to-many mapping. In particular, one hostname is spread across a mesh of servers, and one server may handle any number of hostnames. Thus, for example, in a simple case, one could create three (3) hostnames (x, y, and z) and have four (4) servers (having IP addresses: 1.1.1.1, 1.1.1.2, 1.1.1.3, 1.1.1.4). Any hostname query could resolve to any server address, giving twelve (12) combinations of responses to any particular query. Thus, when the server for this virtual zone gets a request, it replies with any of the 12 possible addresses, although significant performance advantages could be gained if it were to use a more intelligent approach. In particular, it is also well-known that geographical and topological location information can be used to send clients to the closest address, and server load can be accounted for as well to try and optimize client response times. A combination of these (with other factors) can be used to determine the best server for handling a particular request. [0039] As described above, so-called Canonical Name (CNAME) records define an alias for an official hostname and are part of the DNS standard. In general, CNAMEing may be thought of as a means for DNS redirection of a client query. As illustrated above, DNS virtual zones and CNAMEs are used with standard surrogate servers to manage ECDN content in a given enterprise. [0040] The CNAME technique may be implemented any number of arbitrary times, while zone referral preferably occurs hierarchically. Generalizing, according to the invention, a given domain may be of the form "*.zone2.zone1.enterprise.com" where "zone1" represents a first or "top" level ECDN zone, "zone2" represents a second or "lower" level ECDN subzone of the top level zone, and so forth. As indicated in FIG. 6, a first level ECDN zone server 600 would handle top level resolutions, and one or more second level ECDN zone servers 602 would handle the next level resolutions, and so forth. The wildcard "*" in the above example indicates that further zone levels may be provided as well, with each level of the zone hierarchy then inheriting the namespace of the levels above it. Thus, the "virtual" nature of the invention is due to the fact that at no time is there necessarily a fixed set of possible hostnames--they may be, and most likely are, produced on-the-fly. Thus, for example, if the enterprise uses an Oracle database for storing financial data, the network administrator may generate domains such as "financials.ecdn.company.com," while the ECDN itself may be creating objects like "component.oracle.financials.ecdn.company.com," or "a.b.c.oracle.financials.ecdn.company.com" etc. Continuing with this example, if a particular ECDN zone server is provided with a name it does not fully recognize, it resolves that portion that is can recognize. Thus, while a first-level ECDN DNS server might do a rough cut of performance and refer a request to another zone, the next zone may have a finer grain metric used for next step resolution. [0041] According to this disclosure, and as illustrated in FIG. 6, assume that given enterprise content has been published to the ECDN and, in particular, to be managed by a second level zone server "hr.ecdn.akamai.com." In this example, a request for resolution of hr.akamai.com is first directed (through a CNAME) to the ecdn.akamai.com top level zone server 600, which then redirects the request (again through a CNAME) to one of the hr.ecdn.akamai.com second level zone servers 602. The second level name servers may be load balanced, e.g., based on network information collected at the zone server pair about connectivity between the remote office and possible locations of the backend servers that host the desired content, as well state information about the load on those servers. The second level name server returns the IP address of an optimal backend server based on the available metrics (e.g., server load, network connectivity, etc.). The use of hierarchical zones in this manner enables the enterprise to build and maintain more relevant "local" information about how client requests should be mapped, thus obviating the building, maintenance and delivery of enterprise-wide request routing maps. [0042] According to another feature, a software agent is executed in or in conjunction with a DNS server to create "zones" dynamically. In this approach, the agent, in effect, provides hints to the enterprise DNS name server as to which enterprise domains would be beneficially cached, as well as the addresses of near-by (local) ECDN intelligent nodes. FIG. 7 illustrates the agent-hinted CNAME technique. To create such as agent, the enterprise may keep a lightweight history databases within a [0043] DNS server, recording usage patterns of A (address) record lookup requests. Building heuristics around access patterns or looking for high or relatively increased lookups on a per domain basis might provide candidate hostnames. This would identify high usage HTTP sites. The hints might also come from firewalls, third party updates, in-line L4 switches, or from some other entity listening on port 80 (e.g., perhaps via an egress switch SPAN port). When a domain is identified as potentially interesting from a usage perspective, instead of returning the address of the actual domain, the address of a nearby ECDN IN may be substituted dynamically. After traffic dies down, the address could revert back to standard behavior. Thus, if the "hot" domain is "hr.akamai.com," the DNS name server dynamically CNAMEs the domain to the local ECDN zone server "ecdn.akamai.com" as needed. All traffic to the hr.akamai.com would then get sent to the close-by ECDN IN for handling. Agent interaction may be accomplished as a server plugin, through WCCP 2.0 transparent interception of DNS traffic, promiscuous DNS port snooping (SPAN mode), or providing an augmented DNS name server. An interface may be provided to notify the agent that all *.ecdn.enterprise.com servers are part of the ECDN and should be sent towards the appropriate servers. [0044] The disclosed subject matter provides numerous advantages over the prior art. According to the techniques described, (preferably via CNAME) to a set of one or more ECDN zone servers. A given ECDN zone server associates a name query with an ECDN intelligent node (or to a given content server). This association may be based on given conditions within the network, server load conditions, or some combination of various known metrics. With ECDN virtual zones, maps that associate a given request to a server may be local or global, static or dynamic. If multiple zones are used in a hierarchy, a first level ECDN zone server redirects the name query to a second level ECDN zone server, and so on, until the appropriate response is generated. ECDN zone servers may reside on the same machine or with existing DNS infrastructure or other machines in the enterprise. This enables the present invention to be readily integrated into an existing or legacy DNS solution with minimal reconfiguration. [0045] As described and illustrated above, a given zone may have a sub-zone and, according to the invention, that sub-zone may be managed by the enterprise or even by a third party such as an Internet content delivery network (ICDN) service provider. As an example of the latter scenario, assume that the enterprise outsources its streaming video delivery to the ICDN service provider. To this end, a host lookup for, say, allhands.streamingvideo.ecdn.company.com may be configured either statically or dynamically to a zone managed by a CDN overlay network. This has the effect of transparently allowing delivery to be seemlessly tied internally and externally via the same mechanism. [0046] FIG. 8 illustrates another example of how an enterprise may hand off management of a virtual zone to another entity, in this case an enterprise business partner. In this example, producer.com is the enterprise and consumer.com is the enterprise business partner having an end user. Producer.com desires to make given content available to the consumer.com end user, however, the producer.com namespace on which the content is published is not resolvable through public DNS and the document is not available publicly. Both producer.com and consumer.com are assumed to have implemented an ECDN system behind each of their respective firewalls, and each entity has a private namespace that they respectively manage. The goal is to make producer.com's namespace selectively available to an end user in consumer.com without, at the same time, exposing producer.com's namespace to the public DNS. Each ECDN deploys a relay machine which, preferably, is a combination of a surrogate origin server and an ECDN zone server. Separate machines/processes may be used, or the processes may be integrated or included with other functionality. Preferably, each relay machine is located within the enterprise DMZ, as illustrated in the drawing. The relay machine inside consumer.com does not know how to resolve a virtual zone associated with producer.com, but that machine does know that the relay machine inside producer.com's firewall does know how to make such a resolution. Likewise, the relay machine inside producer.com does not know how to resolve a virtual zone associated with consumer.com, but that machine does know that relay machine insideconsumer.com's firewall make know how to make such a resolution. As is well-known, boxes within a DMZ can be selectively allowed to communicate with one another while filtering other potentially hostile communications. [0047] Referring back to FIG. 8, assume that the document in question is referenced within consumer.com by a836.asp.producer.com/object.html. In this example, the zone "asp.producer.com" has been designated within consumer.com as being owned or managed by relay.consumer.com. Thus, relay.consumer.com will act as a surrogate for that document. When the consumer.com relay gets the host header a836.asp.producer.com, the relay machine uses that information to connect to the relay machine in producer.com. The relay machine in producer.com gets the HTTP request and it is configured to rewrite any host header (associated with a836.asp.producer.com in this example) to foo.producer.com. The relay machine at relay.producer.com then acts as a surrogate origin server for the web server at foo.producer.com, which hosts the document. The web server then returns the document via the surrogate origin server relay.producer.com, and that server rewrites the response header to point back to itself. The relay machine in producer.com then returns the data to the relay machine in consumer.com, which performs similar surrogate processing and returns the document to the requesting end user. This completes the processing. Thus, one of ordinary skill in the art will appreciate that, in this manner, producer.com has handed off management of the relay.producer.com virtual zone to consumer.com. This arrangement creates a VPN-like connection between the two namespaces to facilitate the document sharing. As illustrated, preferably the communications between the two relay machines are made over secure link using known techniques. [0048] Having thus described my invention, the following sets forth what I now claim. Patent applications by Charles J. Neerdaels, Aptos, CA US Patent applications by AKAMAI TECHNOLOGIES, INC. Patent applications in class Client/server Patent applications in all subclasses Client/server User Contributions: Comment about this patent or add new information about this topic:
http://www.faqs.org/patents/app/20130024503
CC-MAIN-2014-35
refinedweb
6,161
52.9
What is unit testing? A good enterprise computer system should be built as if it was made of Lego bricks. Your rules are only a piece of the puzzle. You'll need to go back to the Lego box to get pieces that talk to the database, make web pages, talk to other systems that you may have in your company (or organization), and so on. Just as Lego bricks can be taken apart and put together in many different ways, the components in a well-designed system should be reusable in many different systems. Before you use any of these components (or 'bricks') in your system, you will want to be sure that they work. For Lego bricks this is easy—you can just make sure that none of the studs are broken. For components this is a bit harder—often, you can neither see them, nor do you have any idea whether their inputs and outputs are correct. Unit testing makes sure that all of the component pieces of your application work, before you even assemble them. You can unit test manually, but just like FIT requirements testing, you're going to 'forget' to do it sooner or later. Fortunately, there is a tool to automate your unit tests known as Junit (for Java; there are also versions for many other languages, such as .Net). Like Drools and FIT, Junit is open source. Therefore, we can use it on our project without much difficulty. Junit is integrated into the JBoss IDE and is also pretty much an industry standard, so it's easy to find more information on it. A good starting point is the project's home page at. The following points can help you to decide when to use unit testing, and when to use the other forms of testing that we talked about: - If you're most comfortable using Guvnor, then use the test scenarios within Guvnor. As you'll see shortly, they're very close to unit tests. - If the majority of your work involves detailing and signing off against the requirement documents, then you should consider using FIT for Rules. - If you're most comfortable using Java, or some other programming language, then you're probably using (J)unit tests already—and we can apply these unit tests to rule testing. In reality, your testing is likely to be a mix of two or three of these options. Why unit test? An important point to note is that you've already carried out unit testing in the rules that we wrote earlier. OK, it was manual unit testing, but we still checked that our block of rules produced the outcome that we expected. All we're talking about here is automating the process. Unit testing also has the advantage of documenting the code because it gives a working example of how to call the rules. It also makes your rules and code more reusable. You've just proved (in your unit test) that you can call your code on a standalone basis, which is an important first step for somebody else to be able to use it again in the future. You do want your rules to be reused, don't you? Unit testing the Chocolate Shipments sample As luck would have it, our Chocolate Shipments example also contains a unit test. This is called DroolsUnitTest.java, and it can be found in the test/java/net/firstpartners/chap7 folder. Running the Junit test is similar to running the samples. In the JBoss IDE Navigator or package explorer, we select DroolsUnitTest.java, right-click on it, and then select Run as | Junit test from the shortcut menu. All being well, you should see some messages appear on the console. We're going to ignore the console messages; after all, we're meant to be automating our testing, not manually reading the console. The really interesting bit should appear in the IDE— the Junit test result, similar to the screenshot shown below. If everything is OK, we should see the green bar displayed—success! We've run only one unit test, so the output is fairly simple. From top to bottom we have: the time it took to run the test; the number of errors and failures (both zero—we'll explain the difference shortly, but having none of them is a good thing), the green bar (success!), and a summary of the unit tests that we've just run (DroolsUnitTest). If you were running this test prior to deploying to production, all you need to know is that the green bar means that everything is working as intended. It's a lot easier than inspecting the code line by line. However, as this is the first time that we're using a unit test, we're going to step through the tests line by line. A lot of our Junit test is similar to MultipleRulesExample.java. For example, the unit test uses the same RuleRunner file to load and call the rules. In addition, the Junit test also has some automated checks (asserts) that give us the green bar when they pass, which we saw in the previous screenshot. What just happened? Probably the easiest way to understand what just happened is to walk through the contents of the DroolsUnitTest.java file. Our unit code starts with the usual package information. Even though it is in a separate folder, Java is fooled into using the same package. package net.firstpartners.chap7; In our imports section (list of other files that we need), we have a mix of our domain objects (the facts such as CustomerOrder) that we used earlier for holding information. We also have the logging tools. What is new is the imports of Assert (part of our automatic checking tool) and importing the junit test (the template for our unit test). import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertNotNull; import static org.junit.Assert.assertTrue; import java.util.HashMap; import net.firstpartners.chap6.domain.CustomerOrder; import net.firstpartners.chap6.domain.OoompaLoompaDate; import net.firstpartners.drools.RuleRunner; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.junit.Test; The start of the main part of the file may be renamed to DroolsUnitTest, but what it does is the same. The rules are still read from exactly the same file as before. public class DroolsUnitTest { private static Log log = LogFactory.getLog(DroolsUnitTest.class); private static final String NEXT_AVAILABLE_SHIPMENT_DATE = "nextAvailableShipmentDate"; private static final String[] RULES_FILES = new String[] { "src/ main/java/net/firstpartners/chap6/shipping-rules.drl" }; Earlier, our starting point was called main so that Java knew where we wanted it to start when we pressed the green Go button. This time, our start method is called testShippingRules and it's marked with a @Test flag so that we know it's an entry point. We can have multiple tests, each marked with @Test. The Junit framework will test each one in turn. The rest of this code snippet, which involves setting up and calling the business rules via RuleRunner, is exactly the same as our previous 'calling the rule engine' samples. @Test public void testShippingRules() throws Exception { // Initial order CustomerOrder candyBarOrder = new CustomerOrder(2000); HashMap<String, Object> startDate = new HashMap<String, Object>(); startDate.put(NEXT_AVAILABLE_SHIPMENT_DATE, new OoompaLoompaDate(2009, 02, 03)); // Holidays OoompaLoompaDate holiday2 = new OoompaLoompaDate(2009, 2, 10); OoompaLoompaDate holiday1 = new OoompaLoompaDate(2009, 3, 17); // Call the rule engine Object[] facts = new Object[3]; facts[0] = candyBarOrder; facts[1] = holiday1; facts[2] = holiday2; // A lot of the running rules uses the same code. The RuleRunner (code // in this project) // keeps this code in one place. It needs to know // - the name(s) of the files containing our rules // - the fact object(s) containing the information to be passed in and // out of our rules // - a list of global values new RuleRunner().runRules(RULES_FILES, facts, startDate); In our previous example, once we called the rules, we printed the results out to the screen for manual inspection. This time things are different. We want to make this checking automatic. Hence, we have added following new lines in the final snippet, using assertXXX to check if the values that we get back from the rules are as expected: // Check that the results are as we expected assertEquals( "No more bars should be left to ship", 0, candyBarOrder .getCurrentBalance()); assertEquals( "Our initial order balance should not be changed", 2100, candyBarOrder.getInitialBalance()); assertNotNull( "Our list of shipments should contain a value", candyBarOrder.getShipments()); assertTrue( "We should have some Cusomter Shipments", candyBarOrder.getShipments().size() > 5); } } In general, our assert checks follow the format: Assert( "message if the value is not as we expect" , valueWeExpected, valueWeGotWhenWeRanTheTest) - The first line (assertEquals) compares the number of candy bars that should still be left to ship after our rules have fired (should be 0) - The second line (assertEquals) ensures that the initial order is not changed by the rules, and remains at 2100 - The next line (assertNotNull) ensures that the list of shipments that we made is not empty - The final line (assertTrue) checks that we have more than five shipments made to a customer Is it best to have multiple tests or multiple asserts within a single test? It is possible to have multiple tests, such as someTest() methods (each marked with @Test), and/or multiple tests using assertXXX within a method. A combination of both is probably the best. Multiple asserts in one method are great when your test is difficult to set up, but the test will stop at the first assert that turns out to be false. This means you can solve the first error, but your test will then stop at the next assert that fails. Having these asserts in separate test methods shows you instantly how many problem(s) you have—at the price of having some duplicated setup code. What if it goes wrong? We were lucky that our tests worked the very first time. Unfortunately, this is almost impossible to achieve. For example, assume that we mistakenly wrote a rule that changed the initial balance. assertEquals( "Our initial order balance should not be changed", 2100, candyBarOrder.getInitialBalance()); In this case, when we come to check, our test will fail. We will get a red bar in our unit tests, detailing what has gone wrong (similar to the screenshot below). The message in our assert (Our initial order balance should not be changed) and other details (such as line numbers) are provided to help us trace what is going wrong. You'll also notice that the Failures count is now 1. Failures and errors So what's the difference between failures and errors? Failures are things (such as the above assert) that we explicitly check for. Errors are the unexpected things that go wrong. Remember our NullPointerException from the previous section in FIT? That is, the problem that we face when something is empty that shouldn't be. That exception is shown as an error in Junit with a red bar (again), along with the details of the problem to help you fix it.It's simple—green is good and red is bad. But remember, it's always better to catch mistakes early. Testing an entire package Typically, you write a unit test at the same time as writing a set of rules to confirm that the functionality is 'done'. After you're 'done', you (or one of your team) should run all of the tests in the project to make sure that your new work hasn't broken any of the rules or code that already existed. There are a couple of ways to automatically do this overnight (as part of a build control tool such as Cruise Control) as part of your build scripts, or you can run all of the tests from the JBoss IDE (akin to the 'run all scenarios in package' that we saw in Guvnor). It's pretty easy to run all of your unit tests in one go. All you have to do is this: - From the toolbar (at the very top of the JBoss IDE) select Run. - In the dialog box that is dispalyed, selectJunit (near the lower-left of the screen) and then click on New launch configuration (the icon on the upper-left of the screen, as shown in the screenshot). - On the righthand side, fill in the following values: - Click on Apply (to save it, so that you don't have to go through all of the steps the next time), and then click on Run. As before, the JBoss IDE will chug away for a couple of moments, and then the popup Junit screen will be displayed. As before, if any of the multiple tests fail, you'll see a red bar, along with details of all of the items that went wrong. When all of your tests pass, you can be sure that your rules are of top quality?. Summary Thus in this two part series on testing we have seen : - how to test our rules using Guvnor, as well as using FIT for rule testing against requirements documents. - how to test our rules using unit testing with the help of Junit. If you have read this article you may be interested to view :
https://www.packtpub.com/books/content/testing-your-jboss-drools-business-rules-using-unit-testing
CC-MAIN-2015-48
refinedweb
2,221
70.63
- NAME - SYNOPSIS - DESCRIPTION - USAGE - SEE ALSO - REQUIRES - TODO - INSTALLATION - AUTHOR - LICENSE AND COPYRIGHT NAME Declare::Constraints::Simple - Declarative Validation of Data Structures SYNOPSIS use Declare::Constraints::Simple-All; my $profile = IsHashRef( -keys => HasLength, -values => IsArrayRef( IsObject )); my $result1 = $profile->(undef); print $result1->message, "\n"; # 'Not a HashRef' my $result2 = $profile->({foo => [23]}); print $result2->message, "\n"; # 'Not an Object' print $result2->path, "\n"; # 'IsHashRef[val foo].IsArrayRef[0].IsObject' DESCRIPTION The main purpose of this module is to provide an easy way to build a profile to validate a data structure. It does this by giving you a set of declarative keywords in the importing namespace. USAGE This is just a brief intro. For details read the documents mentioned in "SEE ALSO". Constraint Import use Declare::Constraints::Simple-All; The above command imports all constraint generators in the library into the current namespace. If you want only a selection, use only: use Declare::Constraints::Simple Only => qw(IsInt Matches And); You can find all constraints (and constraint-like generators, like operators. In fact, And above is an operator. They're both implemented equally, so the distinction is a merely philosophical one) documented in the Declare::Constraints::Simple::Library pod. In that document you will also find the exact parameters for their usage, so this here is just a brief Intro and not a coverage of all possibilities. Building a Profile You can use these constraints by building a tree that describes what data structure you expect. Every constraint can be used as sub-constraint, as parent, if it accepts other constraints, or stand-alone. If you'd just say my $check = IsInt; print "yes!\n" if $check->(23); it will work too. This also allows predefining tree segments, and nesting them: my $id_to_objects = IsArrayRef(IsObject); Here $id_to_objects would give it's OK on an array reference containing a list of objects. But what if we now decide that we actually want a hashref containing two lists of objects? Behold: my $object_lists = IsHashRef( HasAllKeys( qw(good bad) ), OnHashKeys( good => $id_to_objects, bad => $id_to_objects )); As you can see, constraints like IsArrayRef and IsHashRef allow you to apply constraints to their keys and values. With this, you can step down in the data structure. Applying a Profile to a Data Structure Constraints return just code references that can be applied to one value (and only one value) like this: my $result = $object_lists->($value); After this call $result contains a Declare::Constraints::Simple::Result object. The first think one wants to know is if the validation succeeded: if ($result->is_valid) { ... } This is pretty straight forward. To shorten things the result object also overloads it's boolean context. This means you can alternatively just say if ($result) { ... } However, if the result indicates a invalid data structure, we have a few options to find out what went wrong. There's a human parsable message in the message accessor. You can override these by forcing it to a message in a subtree with the Message declaration. The stack contains the name of the chain of constraints up to the point of failure. You can use the path accessor for a joined string path representing the stack. Creating your own Libraries You can declare a package as a library with use Declare::Constraints::Simple-Library; which will install the base class and helper methods to define constraints. For a complete list read the documentation in Declare::Constraints::Simple::Library::Base. You can use other libraries as base classes to include their constraints in your export possibilities. This means that with a package setup like package MyLibrary; use warnings; use strict; use Declare::Constraints::Simple-Library; use base 'Declare::Constraints::Simple::Library'; constraint 'MyConstraint', sub { return _result(($_[0] >= 12), 'Value too small') }; 1; you can do use MyLibrary-All; and have all constraints, from the default library and yours from above, installed into your requesting namespace. You can override a constraint just by redeclaring it in a subclass. Scoping Sometimes you want to validate parts of a data structure depending on another part of it. As of version 2.0 you can declare scopes and store results in them. Here is a complete example: my $constraint = Scope('foo', And( HasAllKeys( qw(cmd data) ), OnHashKeys( cmd => Or( SetResult('foo', 'cmd_a', IsEq('FOO_A')), SetResult('foo', 'cmd_b', IsEq('FOO_B')) ), data => Or( And( IsValid('foo', 'cmd_a'), IsArrayRef( IsInt )), And( IsValid('foo', 'cmd_b'), IsRegex )) ))); This profile would accept a hash references with the keys cmd and data. If cmd is set to FOO_A, then data has to be an array ref of integers. But if cmd is set to FOO_B, a regular expression is expected. SEE ALSO Declare::Constraints::Simple::Library, Declare::Constraints::Simple::Result, Declare::Constraints::Simple::Base, Module::Install REQUIRES Carp::Clan, aliased, Class::Inspector, Scalar::Util, overload and Test::More (for build). TODO Examples. A list of questions that might come up, together with their answers. A Customconstraint that takes a code reference. Create stack objects that stringify to the current form, but can hold more data. Give the Messageconstraint the ability to get the generated constraint inserted in the message. A possibility would be to replace __Value__ and __Message__. It might also accept code references, which return strings. Allow the IsCodeRefconstraint to accept further constraints. One might like to check, for example, the refaddr of a closure. A Capturesconstraint that takes a regex and can apply other constraints to the matches. ??? Profit. INSTALLATION perl Makefile.PL make make test make install For details read Module::Install. AUTHOR Robert 'phaylon' Sedlacek <phaylon@dunkelheit.at> LICENSE AND COPYRIGHT This module is free software, you can redistribute it and/or modify it under the same terms as perl itself.
https://metacpan.org/pod/Declare::Constraints::Simple
CC-MAIN-2016-30
refinedweb
946
54.93
Introduction A common need when writing an application is loading and saving configuration values in a human-readable text format. For example, if you need to pass in database configuration information at run-time, you might want to have a file that stores the username, password, and database name in a plain-text file that you can modify. This is where INI configuration files come in. Python standard library has a configparser module that can read and write INI style files. We will look at how to read and write INI files as well as some alternatives for storing configuration. Note this is for Python 3. Python 2 does have a ConfigParser.ConfigParser class but it is not exactly the same. Writing INI files To create an INI file and write it to disk, you first need to create an instance of the ConfigParser class. Then you treat it as a dictionary and populate the data you want. Then you use the write() method provided by the ConfigParser object to store the data in a file. Here is an example: from configparser import ConfigParser config = ConfigParser() config['main_section'] = { 'key1': 'value1', 'key2': 123, 'key3': 123.45, } with open('config.ini', 'w') as output_file: config.write(output_file) # Example output in `congig.ini`: """ [main_section] key1 = value1 key2 = 123 key3 = 123.45 """ Reading INI files Let's try to read the file that we created in the last section. We will load the contents in to a config parser and access some of the values. Note that even though we were able to output integers and floating point values, everything gets stored in the INI file as a string. You will have to be explicit when loading the value if you want it to be a specific data type. It is also possible to provide default values in case the key you are looking for does not exist in the INI file. from configparser import ConfigParser config = ConfigParser() config.read('config.ini') # Get a list of all sections print('Sections: %s' % config.sections()) # You can treat it as an iterable and check for keys # or iterate through them if 'main_section' in config: print('Main section does exist in config.') for section in config: print('Section: %s' % section) for key, value in config[section].items(): print('Key: %s, Value: %s' % (key, value)) # If you know exactly what key you are looking for, # try to grab it directly, optionally providing a default print(config['main_section'].get('key1')) # Gets as string print(config['main_section'].getint('key2',)) print(config['main_section'].getfloat('key3')) print(config['main_section'].getboolean('key99', False)) Alternatives Using INI files is only one option for reading and writing configuration values. There are many other options but here are a few worth considering: - JSON files - Python's standard json module is good for reading and writing JSON data which can be written and read from a file. The syntax for JSON is a little more complex to edit by hand, but still possible. This can be good if your configuration is too complex to store in INI format. - Python files - You can create a .pyfile and store your variables in there directly. Then obtaining your configuration is as simple as importing your Python file and accessing the variables. The big benefit here is that you can include some logic if necessary, and store complex data structures. Conclusion After reading this, you should understand how to read and write INI format configuration files in Python as well as some alternative options.
https://www.devdungeon.com/content/python-configuration-files-ini
CC-MAIN-2020-45
refinedweb
583
56.15
Opened 4 years ago Closed 4 years ago #8250 closed bug (fixed) cgrun072 (optllvm) failing Description - Platform: OS X 10.8.4 x86_64 - GHC Version 7.7.20130904 (built with gcc-4.8) To reproduce this: $ make test TEST=cgrun072 WAY=optllvm Expected output (for failure): =====> cgrun072(optllvm) 172 of 3749 [0, 0, 2] cd ./codeGen/should_run && '/Users/leroux/Dropbox/src/ghc/ghc-validate/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -o cgrun072 cgrun072.hs -O -fllvm >cgrun072.comp.stderr 2>&1 cd ./codeGen/should_run && ./cgrun072 </dev/null >cgrun072.run.stdout 2>cgrun072.run.stderr Actual stdout output differs from expected: --- ./codeGen/should_run/cgrun072.stdout 2013-09-04 02:22:32.000000000 -0500 +++ ./codeGen/should_run/cgrun072.run.stdout 2013-09-07 03:27:09.000000000 -0500 @@ -1,3 +1,6 @@ OK -OK +FAIL + Input: 1480294021 +Expected: 2239642456 + Actual: -2055324840 OK *** unexpected failure for cgrun072(optllvm) The failing test case is test_bSwap32. Here are some relevant snippets. bswap and cgrun072 were added in #7902.: bswap32 :: Word32 -> Word32 bswap32 (W32# w#) = W32# (byteSwap32# w#) slowBswap32 :: Word32 -> Word32 slowBswap32 w = (w `shiftR` 24) .|. (w `shiftL` 24) .|. ((w `shiftR` 8) .&. 0xff00) .|. ((w .&. 0xff00) `shiftL` 8) test_bSwap32 = test casesW32 bswap32 slowBswap32: extern StgWord32 hs_bswap32(StgWord32 x); StgWord32 hs_bswap32(StgWord32 x) { return ((x >> 24) | ((x >> 8) & 0xff00) | (x << 24) | ((x & 0xff00) << 8)); } Here are a few things to look at or try. - - - Maybe take an object dump and take what's going on. - gdb debugging? Attachments (4) Change History (12) comment:1 Changed 4 years ago by comment:2 Changed 4 years ago by This is a sign-extension issue: the expected output is 2239642456 = 0x857e3b58, which treated as a signed 32-bit integer is 2239642456 - 2^32 = -2055324840. Obviously, show on a Word32 shouldn't be producing output starting with a minus sign! - The NCG generates code for MO_BSwap W32that does the byte swap and clears the high 32 bits of the result. (This isn't directly relevant to this test failure, but provides context for the rest.) - The LLVM backend generates LLVM code that does the byte swap and then sign-extends the result to 64 bits, like this: %ln2uw = trunc i64 %ln2uv to i32 %ln2ux = call ccc i32 (i32)* @llvm.bswap.i32( i32 %ln2uw ) %ln2uy = sext i32 %ln2ux to i64The reason is that genCallSimpleCastdoes castVarsbefore and after the call to llvm.bswap.i32, and castVarsproduces LM_Sextfor a widening conversion. That's what's causing the strange test output—the payload of a Word32#isn't supposed to have any of the high 32 bits set. - The primop BSwap32Opis documented as primop BSwap32Op "byteSwap32#" Monadic Word# -> Word# {Swap bytes in the lower 32 bits of a word. The higher bytes are undefined. }so both the NCG and LLVM backends are correct, and the test is wrong. It should be doing bswap32 :: Word32 -> Word32 bswap32 (W32# w#) = W32# (narrow32Word# (byteSwap32# w#))and the same for bswap16. - The definitions of byteSwap32and byteSwap16in GHC.Wordare also wrong for the same reason. They should be byteSwap32 :: Word32 -> Word32 byteSwap32 (W32# w#) = W32# (narrow32Word# (byteSwap32# w#))This doesn't currently make a difference, though, because base is built (by default?) with the NCG, which happens to produce a zero-extended result. An alternative approach would be to redefine the BSwap32Op primop to clear the higher bytes of its result. Then the LLVM backend would need to be fixed somehow (looks a little tricky to me) but the cgrun072 test and GHC.Word would be correct as they are. comment:3 Changed 4 years ago by A correction to the last bullet point in my previous comment: fixing GHC.Word.byteSwap{16,32} actually *does* matter because a user program built with -O -fllvm will inline those functions at the Haskell level, and then the LLVM backend will generate the wrong code for the byte-swap primop. import GHC.Word main = print $ byteSwap32 (0xaabbccdd :: Word32) -- built with ghc -O -fllvm -- expected output: 3721182122 -- actual output: -573785174 The cgrun072 test should test the GHC.Word.byteSwap* wrappers, too. comment:4 Changed 4 years ago by The first two patches add more tests to cgrun072. The other two patches fix the use of byteSwap16/32# in cgrun072 and in GHC.Word, as described above (treating the description of the byteSwap primops as leaving the higher bytes of the result undefined as correct). comment:5 Changed 4 years ago by Looks good to me - thanks for the investigation Reid. I'll probably squash these into one patch for base and merge them later tonight. comment:7 Changed 4 years ago by Merged, thanks Reid! Thanks for catching this, I can reproduce it on Linux/x86_64 also. There's a similar issue with bswap16 that cgrun072 doesn't catch, because there are no bswap16 test cases with low byte >= 128.
https://ghc.haskell.org/trac/ghc/ticket/8250?cversion=1&cnum_hist=2
CC-MAIN-2017-39
refinedweb
810
74.08
Build your own Azure CLI Extensions Michael Crump ・4 min read This is Part 2 of 2. You can check out Part 1 below. Build your own Azure CLI Extensions Azure CLI extensions are really helpful. You can read about them in this Azure Tip. You can use extensions from the list here, which you can also get when you enter the az extension list-available --output table command in the Azure CLI. And you can also build Azure CLI extensions yourself. You do that by creating a Python wheel, which is a package of Python code. Let me show you how you can create and use your own Azure CLI extension. Creating an Azure CLI Extension Azure CLI extensions can currently only been Python wheel packages. So to create a new extension, you need to have the following prerequisites installed on your development machine: - Python (version 2.7.9 or 3.4 or up). Download it here - Python wheel (once Python is installed, you can get wheel by using the command pip install wheel) Now that we have Python and wheel installed, we can start to create the extension. 1.) We'll start by creating a new folder that holds all of the files that we need for the extension. Let's call it Tipsextension 2.) In the Tipsextension folder, we'll create some files that make up the extension. These are: - (folder) azext_tipsextension - __init__.py - setup.cfg - setup.py 3.) Now, we will fill in the content of the files. We'll start with the setup.py file. This file will tell the Azure CLI what is in the extension. We'll put in this code: from codecs import open from setuptools import setup, find_packages VERSION = "0.0.1" CLASSIFIERS = [ 'Development Status :: 4 - Beta', 'Intended Audience :: Developers', 'Intended Audience :: System Administrators', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'License :: OSI Approved :: MIT License', ] DEPENDENCIES = [] setup( name='tipsextension', version=VERSION, description='My CLI extension', long_description='An example Azure CLI Extension.', license='MIT', author='MY CORP.', author_email='example@contoso.com', url='', classifiers=CLASSIFIERS, packages=find_packages(), install_requires=DEPENDENCIES ) 4.) The next file that we are going to fill, is the setup.cfg file. This file will be used by Python wheel to create the package that the CLI can use. This file is short and will contain only this code: [bdist_wheel] universal=1 5.) The last file that we'll fill is the __init__.py file in the azext_tipsextension folder. This file contains the actual functionality of the extension. It is written in Python. I'm not a Python expert myself, but it's easy enough to pick up. We'll put this code in the file: from knack.help_files import helps from azure.cli.core import AzCommandsLoader helps['gimme tips'] = """ type: command short-summary: Points you to a world of Azure Tips and Tricks. """ def showtipsurl(): print('Azure Tips and Tricks - The Complete List:') class TipsAndTricksCommandsLoader(AzCommandsLoader): def __init__(self, cli_ctx=None): from azure.cli.core.commands import CliCommandType custom_type = CliCommandType(operations_tmpl='azext_tipsextension#{}') super(TipsAndTricksCommandsLoader, self).__init__(cli_ctx=cli_ctx, custom_command_type=custom_type) def load_command_table(self, args): with self.command_group('gimme') as g: g.custom_command('tips', 'showtipsurl') return self.command_table def load_arguments(self, _): pass COMMAND_LOADER_CLS = TipsAndTricksCommandsLoader 6.) Next, we need to build the application and compile it into a wheel package. We can do that with the command below. The directory should match the directory that contains all of the extension files cd /Source/extension/Tipsextension python setup.py bdist_wheel This output of the build result looks like this and produces a .whl file. (Results of building the extension) 7.) Now, we can try the extension out. We can do that by installing it with the following command az extension add --source C:\Source\extension\tipsextension\dist\tipsextension-0.0.1-py2.py3-none-any.whl 8.) When the extension is installed, you can see the help by using az gimme tips -h or get the results by using az gimme tips (Trying the extension) The above is the happy flow of developing an Azure CLI Extension. Usually, you need to debug the extension and have more control when you are developing it. You can read more about that here. And you can also publish the extension so that people can start using it. You can read about how to do that here. Here are some of the published CLI Extensions that I find very useful: - find, which helps you to get contextual information with the CLI - webapp, which has some extra commands for managing Web Apps, ike creating a new one from the CLI - resource-graph, which enables you to query the Azure Resource Graph Conclusion Azure CLI extensions are a very powerful way to make the CLI work for you. The steps to develop an Azure CLI extension are relatively easy. The downside (to me) is that it is currently only possible to develop the extensions in Python. Maybe in the future, other languages will be supported. In any case, it is wonderful that it is possible to extend the CLI. Go and develop your ultimate extension and share it with the community! Stay connected with me on various social platforms for daily software development news. How I Re-Wrote my Portfolio Site I used CSS animations, P5.js, and FlexBox to create a super fun portfolio site -- and dramatically increased my viewership!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/azure/build-your-own-azure-cli-extensions-42fk
CC-MAIN-2020-16
refinedweb
918
58.58
Uncyclopedia:Village Dump/archive17 From Uncyclopedia, the content-free encyclopedia Mass move request Can someone with the power to do so move every UnNews: article to just the article name? So if the title is "UnNews:Something happens," it would get move to "Something happens". The reason behind this insanity is that the talk pages for UnNews: namespace are completely botched due to UnNews not being a real namespace. Much appreciated. General Specific 13:23, 30 Oct 2005 (UTC) The talk pages may have a messed up location (Ie, being talk:unnews:article rather then unnews talk:article) but they still work, do they not? Is there anything besides this cosmetic problem? If there isnt, it makes sense to keep them with the unnews in front of them, as they are not to be confused with a normal article, the writing style is different and the goal is different. Rangeley 17:05, 30 Oct 2005 (UTC) - Yeah, I'm not sure I see the point of moving the pages. It's probably for the best just to leave them as "UnNews:News" instead of "News" or "News (UnNews)" (as you asked on my talk page). Less trouble all around. --—rc (t) 17:11, 30 Oct 2005 (UTC) - I'd be fine with it but it screws up and ends up doing "Talk:UnNews:News" for discussion pages and the page ends up not in the Talk: namespace. Aside from that, it's getting really annoying creating a million different workarounds for linking to "UnNews:" articles by actual name. General Specific 17:40, 30 Oct 2005 (UTC) - [[UnNews:Hurricane Beta released|]] (with a pipe character before the ]]) should link Hurricane Beta released. Talk:UnNews:... seems to only contain nine unique pages at the moment. --Carlb 18:12, 30 Oct 2005 (UTC) - Are you sure you can't do this? I don't see a problem with doing the move except a little excess work that I'd be glad to clean up. Proper namespacing would help out tremendously with DynamicPageLists once Wikicities upgrades to 1.5. General Specific 18:17, 30 Oct 2005 (UTC) - Not sure if it would help. Proper namespacing would be to use the custom namespace support described in m:Help:Namespace#Custom_namespaces and m:Help:Custom_namespaces. This would need to be done for us by Wikicities; it's not something to which our admins have access. Otherwise, I don't see how renaming UnNews:Title to just plain Title gets us any closer to being able to use these with DynamicPageList - the article would still be in the main namespace either way? --Carlb 18:30, 30 Oct 2005 (UTC) - DynamicPageList displays the full title of each article. So it would always list "UnNews:Article name" where it should be just "Article name" (even though the actual page is Article name. If you put it in the main namespace it gets rid of a lot of unneccessary complexity. General Specific 18:46, 30 Oct 2005 (UTC) - m:DynamicPageList and m:DynamicPageList2 support a parameter shownamespace=false which would do this (at least for a real namespace); problem is DynamicPageList itself requires MediaWiki 1.5 so is of no use to us yet. The main Wikipedia's on version 1.6, we're not. Issue was raised on Uncyclopedia:Report_a_problem#Dynamic_Page_List's_and_Inputbox_extensions in August with no response. --Carlb 19:11, 30 Oct 2005 (UTC) - Just an observation, we have a lot of fake "namespaces". Wilde:, HowTo:, Undictionary: (reduced but still around), all the front page themed replacements, as well as all the language codes. The talkpages for all are in the same boat, but we survive. Fixing Unnews: would not even put a dent in the (insignificant) talk page problem. Making them real namespaces would be possible but probably not worth it. I vote Don't move --Splaka 22:11, 30 Oct 2005 (UTC) - I'd hesitate to consider a front-page theme a "namespace" as it's only one page;, and are not quite deprecated in the main Uncyclopedia by the creation of separate projects for these languages; Undictionary: still needs to drop to its final size (26 pages); most new pseudo-namespace entries are likely to be in Wilde:, UnNews: or HowTo: (at least if one were to ignore Zork/the game) --Carlb 22:45, 30 Oct 2005 (UTC) - They are all fake "namespaces" (if they are used for one page, 26, or 500) in that their talk pages are not "Namespace_talk:" but "Talk:Namespace:". If the original point of the moves GS wants is to eliminate such formats (" The reason behind this insanity is that the talk pages for UnNews: namespace are completely botched due to UnNews not being a real namespace.") then removing Unnews: prefixes will not help any of the others. Hence, it shouldn't be done. --Splaka 22:49, 30 Oct 2005 (UTC) More words needed for Mad Libs That is all. --Algorithm 02:33, 30 Oct 2005 (UTC) Morse code Check it out, I morse coded the WHOLE thing :) --Maj Sir Insertwackynamehere CUN VFH VFP Bur. CMInsertwackynamehere | Talk 23:11, 29 Oct 2005 (UTC) DynamicPageList extension Can someone please install the DynamicPageList2 extension? We need it for UnNews. General Specific 20:55, 29 Oct 2005 (UTC) - You would probably have to talk to Angela or JasonR about this because none of the Uncyclopedians have access to that server stuff. Try IRC or the centeral WikiCities wiki. --Paulgb Talk 00:14, 30 Oct 2005 (UTC) - This would also require upgrade to MediaWiki 1.5 or later; wikicities:Talk:Technical_support#When's_1.5_coming? was saying "at least a couple of weeks" back in mid-July but nothing's been done. You could try raising this issue again there or on Wikicities:Talk:Community_portal? --Carlb 19:45, 30 Oct 2005 (UTC) Videogame wars Does anyone else think this is getting just a little out of hand? I suggest that we only do videogame war pages for really obscure wars, as that might actually be funny. General Specific 15:10, 29 Oct 2005 (UTC) - As long as they're funny and don't get old, they're fine. --Savethemooses 15:17, 29 Oct 2005 (UTC) - Most of them seem pretty funny and somewhat historically relevant. It's nice to hear all that stuff from history class in a way that's actually entertaining. --Spintherism 20:50, 29 Oct 2005 (UTC) - D00dz, ph33r th3 733t h@c]<0r$ @nd th3r m@d w@rz! I mean I think all wars should be fought over IRC channels using elite speak. Less people die that way. :) --Loke 22:43, 29 Oct 2005 (UTC) Désencylopedie Those wacky Québécois! We need to spork more articles and steal more ideas from Désencyclopedie. I haven't gone near French since learning it in school twenty years ago, but I know enough to steal their good jokes. I suggest others do the same - David Gerard 13:09, 29 Oct 2005 (UTC) - Just so long as they aren't plays on words - those usually don't translate well... --Carlb 18:15, 29 Oct 2005 (UTC) Uncyclopedia Imperial Colonisation Wikipedia has a collaboration system where groups come together and work on articles together for a certain amount of days, to get it into shape. I and a few others have thought of implementing it here, where we can have a certain article each week for different catagories, such as Geography, History, etc. This would also be helpful to get people started, and in the end would create better quality articles. What does everyone think? Rangeley 05:05, 29 Oct 2005 (UTC) I wholeheartedly endorse this proposal. --KP CUN 06:53, 29 Oct 2005 (UTC) This already happens on an ad-hoc basis although it seems to be a collection of articles rather than one, noted examples are KP's MLB series and User:Marcos Malos Disney articles. That being said it wouldn't hurt putting this on a formal basis if thats what you want to do.--The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 08:17, 29 Oct 2005 (UTC) To get things started, I made the Uncyclopedia:Imperial Colonization page, along with two projects. Rangeley 15:14, 29 Oct 2005 (UTC) We should have the various colonizations rotated at random on the main page, much like the feature image. Rangeley, you may want to change the dates when the collaborations begin. Give a week for nominations and votes, start the collaborations next week. --KP CUN 16:54, 29 Oct 2005 (UTC) Well, I was looking to get things started so that there would be things to work on for this week, but if it makes morse sense to wait I guess we can. Rangeley 18:45, 29 Oct 2005 (UTC) Isn't this what Uncyclopedia:Pee Review is for?--Jordanus 20:55, 29 Oct 2005 (UTC) The colonization is a little bit more directed. While Uncyclopedia:Pee Review provides support for individual efforts, the colonization is a larger collaborative effort to fill out large areas of Uncyclopedia with content. --KP CUN 22:57, 29 Oct 2005 (UTC) I have added another week for the biography nominations. As for the others, they can be used as a 'test run' to see how colonizations will work out. So Afghanistan, Air Conditioning, Great Chicago Fire, and Lithium are this weeks chosen articles for colonization. Let the edits commence. Rangeley 07:00, 30 Oct 2005 (UTC) I demand Ballmer be brought back. Just because it was overused doesn't mean it deserves deletion. I honestly thought it was funny. It may not have been very good on some articles, but there were a ton of really hilarious ones. --Nintendorulez 16:01, 28 Oct 2005 (UTC) - Mr. Ballmer is still here it's just that he has transcended template form. If there are one or two articles that would be especially Ballmerable, the quote can be always entered manually. --Spintherism 21:56, 28 Oct 2005 (UTC) - And while we're at it, let's put more COWBELL in the articles so that our COW-ORKERS can get a laugh because GEORGE BUSH DOESN'T CARE ABOUT STEVE BALLMER. And I, FOR ONE, WELCOME OUR NEW BALLMER OVERLORDS since THEY BURNING YOUR DOG. Get the pattern here? Let's kill all humor by overusing it. --—rc (t) 22:36, 28 Oct 2005 (UTC) - Yeah. Dude, nothing's stopping you from just manually adding the code. The template was removed so that total retards are unable to put the Ballmer quote on every page everywhere. Smart people who know how to actually copy the code can still put it anywhere. Of course, it won't stay long with Flammable on the loose ... --Nerd42 01:05, 29 Oct 2005 (UTC) - Ya know, that paragraph reads like the seed of an article - David Gerard 09:58, 29 Oct 2005 (UTC) Insert Header Here, dude, Nope I would suggest that the page on Time Cube be copied straight from Wikipedia [1] because it is a lot funnier than what is up now.--AsslessChaps McButtviolation - Plagiarism isn't funny. Sporking, however, is.--Sir Flammable KUN 14:22, 28 Oct 2005 (UTC) - It's like trying to make parodies of Xenu or Space opera in Scientology. it's REALLY HARD to outdo the original ... - David Gerard 20:32, 28 Oct 2005 (UTC) - Or Mormonism. --Spintherism 21:53, 28 Oct 2005 (UTC) - Mormonism? Just drop the second 'm'. And then there's the Agnostic branch of the Jehovah's Witnesses... they knock on doors but aren't quite sure why. As for a Time Cube? Does anyone have a square alarm clock handy? --Carlb 16:37, 29 Oct 2005 (UTC) The Uncyclopedia drinking game - I started this article, but considering what it is I thought the community in general should contribute to it. I think it would be a funny way of listing a lot of in-jokes on Uncyclopedia. --neoEva88 19:38, 27 Oct 2005 (UTC) - Uh-oh. Looks like we're about to see a rise in EWI (editing while intoxicated). --Savethemooses 23:10, 27 Oct 2005 (UTC) Wilde: namespace From the Uncyclopedia drinking game: "Start chugging when the page has Oscar and/or Wilde in the title." - Egads! If the Wilde: quotes are still using a variant of the old (and now deprecated, or maybe desecrated) Undictionary: structure, this would be a one-way ticket to the local detox. Unlike the old template:Wilde (which left the quote in the original main article's body text), template:OWQ makes every quotation a separate substub of an article with Wilde: in the title. The Uncyclopedia runneth over with those li'l one-line quotes. Mind you, a robot to pull the template:Wilde text out of articles and generate Wilde:A-Z would have to download every bloody article with that template in place, but a drink for every Wilde: quote in main article space? Time to buy stock in the local brewery! --Carlb 18:13, 29 Oct 2005 (UTC) - I think the Wilde: namespace (or rather, separate pages for each quote) is required to make the embedding using the OWL template work right. There should be standardization done, way too many wilde templates and thingles. Maybe putting the quotes in "Article_Name/Wilde" or such, with a template to show it on the page, and a template to show it in Wilde:A-Z. --Splaka 03:23, 30 Oct 2005 (UTC) Community Portal overhaul IMHO, Uncyclopedia:Community_Portal should be featured more prominently (IE, on the front page). It can be a really useful resource when trying to find a general page (such as QVFD). General Specific 02:00, 27 Oct 2005 (UTC) - It is in the navigation box, below the logo, on every page (In the monobook skin anyways). --Splaka 02:08, 27 Oct 2005 (UTC) - Wow. Never noticed that, I'd always typed it in. General Specific 02:58, 27 Oct 2005 (UTC) - +1 awesome. --Chronarion 06:42, 27 Oct 2005 (UTC) - THE WINNER IS YOU! --Savethemooses 23:29, 27 Oct 2005 (UTC) - User:Insertwackynamehere/Admin/Useful_links is cooler :P heh if you're an admin at least :P --Maj Sir Insertwackynamehere CUN VFH VFP Bur. CMInsertwackynamehere | Talk 01:17, 28 Oct 2005 (UTC) Vandalpedia now open - Could we get an explanation? --Paulgb Talk 23:25, 26 Oct 2005 (UTC) - What the fuck, indeed.--Nytrospawn 23:56, 26 Oct 2005 (UTC) - I'm sorely tempted to systematically disobey all the rules on the main page. --Spintherism 03:19, 27 Oct 2005 (UTC) So uh, this is unofficial, but it's apparently a weird userspace on sportslog. I can't imagine this is a brilliant idea, but it's not mine..--Chronarion 06:33, 27 Oct 2005 (UTC) The Cursed Uncyclopedia Article I'm not doing vandalism here, and I'm not much of a Halloween fan ... but this does seem like a pretty good Halloween joke to me ... Don't read the article, it's cursed, and you might be banned from Uncyclopedia for reading it. In fact, I might be banned from Uncyclopedia for writing it ... but anyway, read Uncyclopedia:Curse and help me make this as scary as possible! (short of actually reading it) And don't delete it please I think somebody's going to come up with something clever to make it really scary for sure! :) Please don't link directly to the page, but to Uncyclopedia:Curse instead please. --Nerd42 17:39, 26 Oct 2005 (UTC) - I don't like it. It's nowhere near amusing, and it reminds me of chain letters, which I hate.--Sir Flammable KUN 18:30, 26 Oct 2005 (UTC) - I agree that it's not very funny, but I stop short of disliking because I would like to promote the idea of an Uncyclopedia Halloween article. IMO, it would only be worth it if it's front-page worthy by Halloween though, which doesn't look very likely. --neoEva88 19:33, 26 Oct 2005 (UTC) - Heres what I would do: Get rid of the article itself, to make it more mysterious. Make the article about the cursed article, but the cursed article doesn't exist. For example, move some of the stuff from the talk page over. You could take a screenshot of the article as it is now, and include it inline with the article. I think it would be more funny that way and it would fit better into uncyclopedia. Personally I like the idea, but the article itself doesn't look cursed to me. (Not that I have seen it. Stay away from my user page with that template) --Paulgb Talk 21:51, 26 Oct 2005 (UTC) - You can be uncursed. Follow the clues. -- Anonymous - Flammable you don't get it at all - it's supposed to remind you of chain letters because it's making fun of chain letters. And of the idiots who read them and send them on! That's the point! Duh! omg you're stupider than ... then ... than me! And that's a good idea paulgb ... except don't delete the page ... that would stop anything more from being added to the legend ... --Nerd42 22:45, 26 Oct 2005 (UTC) - and another thing, you can't be uncursed. It's perminant! --Nerd42 22:46, 26 Oct 2005 (UTC) - yes, and make sure you don't get the Double Whammy ... its part of the original curse --Nerd42 23:02, 26 Oct 2005 (UTC) - i'm not cursed --Paulgb Talk 23:40, 26 Oct 2005 (UTC) I really think if the actual page was gone, then nobody would be getting cursed, thus the danger's gone, thus the mystery's gone, don't you think? --Nerd42 02:55, 27 Oct 2005 (UTC) Uh... I hate to say it, but the humor is sort of lost without a cuebat and the cursed article being that... "Oh my god, you're cursed" is... rather boring over the internet. --Chronarion 06:06, 27 Oct 2005 (UTC) - What the heck is a cuebat? Google has no definitions for it. Hey wait, I know I'll link it. Cuebat.--Nerd42 13:25, 27 Oct 2005 (UTC) - Those stupid Cuebats. Lost us the World Series. Some background: the Houston Astros invented a really cool way to blow off steam after tough games, by combining Pool (I almost typed "poop", hehe) and Baseball into "BasePool". It's really pretty much the same as regular Pool, except with a baseball instead of a cueball, and a modified baseball bat instead of a cuestick. You see, the cork inside the bat was causing the cuebaseball to shoot off the table, so they used "regulation" (wink wink) bats for their BasePool games. Well, after Katrina and Rita hit, all their regular ballboys were in shelters, so the team hired some beachcombers who got evacuated from Galveston, and sure enough, the Bozos got the bats mixed up. The Astros took the field with uncorked bats, and the result was a Houston disaster the likes of which hadn't been seen since Hurricane Enron. - So that's the story of the Cuebat. Hope it helps clear things up. -- SirBobBobBob ! S? [rox!|sux!] 17:25, 27 Oct 2005 (UTC) - Sounds like an Ayn Rand story --Nytrospawn 18:31, 27 Oct 2005 (UTC) The FATE OF UNDICTIONARY will be decided NOW. (Black Wednesday) Judging from this conversation, there are a good number of people who are unhappy with Undictionary. Therefore I propose this plan to cut down Undictionary to only the alphabet pages, which will both reduce the ridiculous number of (mostly lame, in my opinion) individual-paged Undic entries and make the Undictionary easier to handle in the future. - Copy the contents of the existing Undic entry - we will call it Undictionary:Title - to its corresponding alphabet page (Undictionary:T in this case), replacing the {{def|Title}} template that the alphabet pages currently use. - Delete pages that redirect to Undictionary:Title, except for (if it exists) the redirect from [[Title]], which would be changed to redirect to Undictionary:T#Title. - Delete direct links to Undictionary:Title, OR simply change them to point to [[Title]]. - Delete Undictionary:Title itself. - Optional: delete entries on the alphabet pages that aren't dictionary-ish. (But we can decide on this later.) It will be hard and it will not be fun. But I, for one, am willing to start this process in order to reduce Undic to a reasonable, possibly even humorous section of the site. Who's with me? FOR! --—rc (t) 05:44, 26 Oct 2005 (UTC) Foooooooor in anime-style obliviousness.--Sir Flammable KUN 06:00, 26 Oct 2005 (UTC) For. Also, can we fix the Template:Undictionary to point to the correct place, and insert it into a few pre-existing Title pages that match? --Splaka 06:07, 26 Oct 2005 (UTC) - {{Undictionary}} is currently only used twice and I've changed both instances to the new version: {{Undictionary|A}} through {{Undictionary|Z}} depending which of the 26 pages is to be pointed to. --Carlb 19:25, 26 Oct 2005 (UTC) Eh, I'm not an admin, so I can't really help out, apart from posting endless lists of annoying/unfunny undictionary articles to QVFD, but I agree. From what I've seen of undictionary, it just seems to be any short article, without taking the relative merits of said articles into consideration. At least there isn't any 1337 ones that don't have a reason for being that way... --Malleus 11:45, 26 Oct 2005 (UTC) - Actualy the only thing you can't do that admins can is 2. and 4. --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 12:07, 26 Oct 2005 (UTC) For. you do realise you can use subst: to make creating the indivudal pages a lot easier. --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 12:07, 26 Oct 2005 (UTC) - Not convinced - my vote's up for grabs (for what it's worth) Problem: but then every time some n00b writes a clever (but not funny) shortie it'll have to be deleted, not moved. Mere mortals cannot do this ... that takes admin privilages. So this basically looks like it's gonna be more work for admins ... forever am I correct? --Nerd42 12:47, 26 Oct 2005 (UTC) - No, the opposite is true. Mere mortals have never done any significant MTU work and never will. Deleting is much easier. --Spintherism 14:24, 26 Oct 2005 (UTC) - well ... they've tagged articles to be moved ... but yeah I C your point ... no wait a second ... are you saying that "all new pages that are too short for uncyclopedia will be deleted instead of moved to unDictionary and existing ones will be merged into 26 pages?" --Nerd42 14:48, 26 Oct 2005 (UTC) - The content of new short pages can still be moved to Undictionary and the parent page deleted (or left with a redirect, per #2). This will actually make it easier for people to create new entries for Undictionary (once they're aware of the change), as well as easier for admins to maintain the Undic entries. No messy movages and an overall less cluttered site. It's a Good Thing. --—rc (t) 15:56, 26 Oct 2005 (UTC) For To many random hits go to undic, and too much crap is moved to undic instead of deleted. --Sarducci 17:16, 26 Oct 2005 (UTC) - As a slight aside, I think undictionary should be for word definitions, not one/two sentence articles. Things that aren't definitions should be deleted as the Undictionary is collated.--Sarducci 18:28, 26 Oct 2005 (UTC) - Other people have said the same thing. I'm neutral on the issue - as long as Undictionary is confined to just a few pages, I'm happy. It looks like Carlb has already copied the contents of the Undic pages to their respective alphabet pages, so if you want to start a vote about whether to keep non-dictionaryish entries, go ahead. I'll get started on the rest of the maintenance later today if I have time. --—rc (t) 18:37, 26 Oct 2005 (UTC) For x4 This issue is in the forefront. --Savethemooses 17:47, 26 Oct 2005 (UTC) - Comment For (2) and (4) above, looks like the redirect at [[Title]] will need to be clobbered/overwritten with [[Undictionary:Title]], which will then be edited to #REDIRECT [[Undictionary:T#Title]]. Effectively, undo the move that was made when these were originally MTU'ed and then redirect 'em to one of the 26 Undictionary:A-Z pages. That way, edit history is preserved for what little it's worth. I shall have to see if I have a way of automating this as there are likely six hundred or so of the bloody things, all useless... --Carlb 19:25, 26 Oct 2005 (UTC) Foreign No habla Englais General Specific 19:26, 26 Oct 2005 (UTC) For It's about time we took down the anti-quality-control barrier that surronds the Undic. --Sir AlexMW KUN PS FIYC 19:28, 26 Oct 2005 (UTC) Killer robots have been dispatched to redirect Undictionary: entries with extreme prejudice. Achtung! --Carlb 20:17, 26 Oct 2005 (UTC) - Ack, I don't think we ever need Undictionary:Title redirects. They shouldn't need to exist. What I meant above was: For an undictionary entry Undictionary:Title, move the contents to Undictionary:T#Title and delete Undictionary:Title, and then one of the two cases happen: - 1. If Title (the top namespace version of former Undictionary:Title) exists, put in an {{Undictionary|T}} template. - 2. If Title doesn't exist or is a redirect to Undictionary:Title, change that redirect to Undictionary:T#Title. - This prevents double redirects, and allows people to linkify any word in the undictionary and not get a red link. Also, later, people can expand Title to a full article and also add the template. With this system, the Undictionary:Title redirects and entries need never exist again. I think? It seems simpler anyways! --Splaka 20:41, 26 Oct 2005 (UTC) For. Rangeley 20:53, 26 Oct 2005 (UTC) I vote for whatever option involves the most deletion. --Spintherism 23:03, 26 Oct 2005 (UTC) - Very well then, grab some marshmallows, sit back and watch 'em burn... --Carlb 05:31, 27 Oct 2005 (UTC) I say 8, because I'm 4 it twice over. After my abortive attempt to slog through the MTU pages (just 700? yeah, I'll do it over lunch. help! drowning! AAAAAAAAA!), I'm all for massive huffage. "All new pages that are too short for uncyclopedia will be deleted" sounds harsh, but that's the way I like my coffee*. * Disclaimer: I don't drink coffee. -- Sir BobBobBob ! S ? [rox!|sux!] 15:53, 27 Oct 2005 (UTC) I'm beginning to think perhaps some people have a different purpose in mind than others. Some simply want Uncyclopedia to be a place to have fun reading/writing funny articles, and some people seem to think that Uncyclopedia is supposed to be funny as the primary objective, whether people are having fun or not. Though my two (count 'em two) undictionary contributions will no doubt be the first to go (and I don't care because they're stupid anyway) I really think this site is beginning to reach a crossroads ... and it's not as fun anymore ... you know? The whole point of being funny is, or ought to be so people have fun. --Nerd42 16:11, 27 Oct 2005 (UTC) - Uncyclopedia is supposed to be funny. It is both possible and plausible to have fun and write articles at the same time. This does not necessarily mean you can pound out articles like crazy. See ipod nano 200gb for example. I write an article like what, once every three months? Gist of it is, if you can't be funny, I really am not forcing you to stay around and write stuff. The sad reality is that it's less fun to read if half the pages are complete shit. Honestly, it's supposed to be more fun to READ than to WRITE, unless you are rather good, or rather masochistic.--Chronarion 17:51, 27 Oct 2005 (UTC) - Well, there's something to be said for having fun writing. Fortunately, for those for whom the joy of writing doesn't quite match the joy of reading, there are several support groups online. Meanwhile, I suspect there's a Wikipedia Village Pump archive out there somewhere, in which somebody says "it's not as fun anymore..." -- SirBobBobBob ! S? [rox!|sux!] 19:02, 27 Oct 2005 (UTC) - Fun doesnt mean writing short, badly written things. And reading these short, badly written things generally isnt fun either. Writing good articles, and reading good articles is what makes this place great, and unique. This is hardly a crossroads, to delete bad articles or entries is keeping up with the tradition, and something that happens every day. And something that should be continued.Rangeley 19:11, 27 Oct 2005 (UTC) - The thing some writers here fail to realize is that most websites have something like 10,000 reads to every write. I'm not kidding. Even wiki pages are generally read more than they're updated. Therefore, fun in writing is not our target, fun in reading, which most people do, is our target. If it isn't funny to read, it should be burned with great prejudice. Nothing personal, but very few people find modifying stupid articles the height of entertainment. Nerd42, if you find editing crap articles entertaining, please, by all means, DON'T WRITE A NEW ARTICLE EVER AGAIN, just go through the two-point-five metric tons of crap we already have, and make it funny. Seriously, you'd be doing the site a favor. » Brig Sir Dawg | t | v | c » 15:35, 29 Oct 2005 (UTC) FORE! *whack* --Algorithm 09:06, 29 Oct 2005 (UTC) United States of Armenia article Astrokey44 took it upon himself to merge United States of America into United States of Armenia. If this merge is appropriate, then at the very least, this article needs some attention; it has physically overlapping templates, among other problems. Better yet, I'd suggest that the two articles be unmerged; but maybe the decision to merge has already had consensus. --Ogopogo 04:01, 25 Oct 2005 (UTC) - I don't think there has been any consensus. I noticed he was merging some articles but haven't had time to check up on him. Proper merges are badly needed on many articles here, see the Jesii discussion below. (Just a comment) --Splaka 04:06, 25 Oct 2005 (UTC) Some admins better check the quality of this guy's merges and undo one's he screwed up, like this one, or ones he didn't move contants, like mj below. Sarducci 12:24, 25 Oct 2005 (UTC) I agress, I hd just assumed that the merge had been accepted. I don't see any reasone why the merge is appropriate. The United States of America article should be its own article. I do think it would be appropriate to have a United States page that is a navigation page linking to The United States of America, The United States of Armenia, and United States of Germania. --neoEva88 19:40, 25 Oct 2005 (UTC) - And of course The United States of Whatever article, how could I miss a chance for shameless self-promotion? --neoEva88 19:30, 26 Oct 2005 (UTC) So... I decided to take care of this, and made United States A disambiguation page. When trying to un-merge united states of America with US of Armenia, I found no evidence in the history of an article being in the United states of America that was every anything but a redirect. am i missing something? --Sarducci 20:56, 26 Oct 2005 (UTC) Yes, you are. When it was moved to United States of Armenia, so was its history. The redirect was a page created, when it moved, thus creating a new history. Rangeley 23:06, 26 Oct 2005 (UTC) - I was gonna say, the United States of Armenia article seems to be a misplaced Uncyclopedia article on the United States of America. Since the location provides no added humor to the article, I propose we just move it back. --neoEva88 23:10, 26 Oct 2005 (UTC) But misnaming it Armenia creates hours of fun! Yea, I completely agree about the move back to the correct place. Rangeley 23:15, 26 Oct 2005 (UTC) Someone please move it back. --Chronarion 06:01, 27 Oct 2005 (UTC) That requires the current redirect located at United States of America to be deleted. I thought Id be clever and try to move the redirect to United States of Americas, only to find that it made another redirect in its place. Rangeley 19:05, 27 Oct 2005 (UTC) - If you hadn't moved the redirect, just moving United States of Armenia to United States of America might have worked. MediaWiki will let you move an article over top of a redirect, provided the destination is just one simple redirect with no article history and that the move you're making just undoes the previous move. And besides, it shouldn't have been moved to Armenia in the first place but to Albania, the capital of New York State --Carlb 17:09, 29 Oct 2005 (UTC) Article moved back to America. I'm not even going to try to clean it up, though. --Algorithm 08:59, 29 Oct 2005 (UTC) Michael Jackson As well, Astrokey44 merged the Rev. Michael Jackson article into the Michael Jackson article, without transfering the Rev. Michael Jackson contents into the surviving article. I took the liberty of doing it. If a merger is to be done, it should be right, eh? --Ogopogo 04:01, 25 Oct 2005 (UTC) - Yes. Though Rev MJ wasn't particularly good, so well, it's alright. --Chronarion 17:54, 27 Oct 2005 (UTC) Now Hiring Sysops Are you looking to join a team of extremely enthousiastic and sarcastic individuals? Do you like to feel like you are a valuble part of a team? Are you looking for an employer who doesn't care that you are a gay disabled monkey? Do you want to feel that you are doing the right thing? Do you want an employer who doesn't refer to you as his slave if you ask him nicely? If you said "I prefer not to answer" to any or all of these questions, you may be qualified to be a sysop at Uncyclopedia. Requirements - Must be experienced with MediaWiki software (5-6 years) - Must have at least 10 fingers in total (its a bi-law, its beyond our control) - Must like to ban noobs - Patience with noobs - Must use IRC - Must be willing to never again add actual content to Uncyclopedia Benefits (this space intentionally left blank) - Sounds like fun, sysop status for all! --Maj Sir Insertwackynamehere CUN VFH VFP Bur. CMInsertwackynamehere | Talk 02:15, 25 Oct 2005 (UTC) Nominations (I'm sorting through User:Paulgb/Slaves, a list created by User:Splaka and User:Dawg, for nominations. If you would like to nominate yourself or someone else, add a heading here.) General_Specific (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) I don't have a snowball's chance in hell General Specific) Phwaarr</tabloid> --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:13, 28 Oct 2005 (UTC) Some new information about General Specific came to light on IRC: <General_Specific> i can't stay for long <General_Specific> i defeated an irc addiction a couple years ago Apparently he huffed IRC in the past and wishes to stay off it in the future... » Brig Sir Dawg | t | v | c » 21:45, 29 Oct 2005 (UTC) SCANDAL: <Codeine>so what's your manifesto <General_Specific> uh... get elected and milk the system like there's no tomorrow? General Specific - can he be trusted? --—rc (t) 01:34, 30 Oct 2005 (UTC) <General_Specific> i should really stop saying incriminating things on IRC, like that i'm a corrupt, lying scumbag who's going to turn his back on the voters when they need him most DWIII (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) Nomination (From /Slaves suggestion) --Paulgb Talk 01:59, 25 Oct 2005 (UTC) What can I say??? I'm honored to be even considered, and floored, and flabbergasted, and many other confusing emotions beginning with "F" over the past several days... if there is anything I can do to help out, I wouldst gladly accept :-) --DWIII 22:31, 30) Floor --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:14, 28 Oct 2005 (UTC) BobBobBob (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) Nomination and vote (From /Slaves suggestion) --Paulgb Talk 02:02, 25 Oct 2005 (UTC) Wow, this is ironic. I whine a bit, start a project that I don't finish, make a shameless knockoff of someone else's concept, then drop completely out of the loop for a couple of weeks. If that's what it takes to get root admin, then all I can say is... why the hell not [2]? Oh, and thanks. -- Sir BobBobBob ! S ? [rox!|sux!] 16:03, 27 Oct 2005 (UTC) - Important Cravate: I will follow all rules in the spirit of Uncyclopedia, which of course means that I'll break them if I wanna. In particular, I'm almost certain to break the "new rule" (Must be willing to never again add actual content to Uncyclopedia). I just get inspiration and create something like Blonde or Vienna sausage. If I don't, I'll be back to my old habits, and my parole officer keeps a running count of the neighborhood cats. Also, I'm afraid of IRC... I think it might be hazardous to my job. If I'm not qualified for adminishipishness, I can live with that. There's always QVFD. -- SirBobBobBob ! S? [rox!|sux!] Four --Splaka 20:43, 27 Oct 2005 (UTC) Phfor --Spintherism 22:24,) Fleur --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:15, 28 Oct 2005 (UTC) -phore » Brig Sir Dawg | t | v | c » 14:38, 29 Oct 2005 (UTC) Ogopogo (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) Nomination and vote --Splaka 02:42, 25 Oct 2005 (UTC) I'd be interested. Sounds fun. --Ogopogo 01:55, 26 Oct 2005 (UTC) - For. Rangeley 21) - Fee --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:15, 28 Oct 2005 (UTC) - Four » Brig Sir Dawg | t | v | c » 14:38, 29 Oct 2005 (UTC) Rangeley (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) Nomination and vote --Splaka 02:42, 25 Oct 2005 (UTC) - I not only accept the nomination, but I have size 10.5 shoes and my favorite German Chancellor would have to be Angela Merkel. Rangeley 16:08, 25 Oct 2005 (UTC) For Seems like a level-headed fellow. --—rc (t) 18:21, 27 Oct 2005 (UTC) For! Banter ftw!--Sir Flammable KUN 21:36,) Foo --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:17, 28 Oct 2005 (UTC) Fore! -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 01:23, 29 Oct 2005 (UTC) -phore » Brig Sir Dawg | t | v | c » 14:38, 29 Oct 2005 (UTC) Spore --Savethemooses 20:37, 29 Oct 2005 (UTC) May the best man win. For General Specific 03:15, 30 Oct 2005 (UTC) Mhaille (Talk • Contribs (del) • Editcount • Block (rem-lst-all) • Logs • Groups) Me, me......I want to be a PsyCop. Mentally boring into the minds of n00bs, revealing their innermost thoughts and fears. Look into my eye, and dis bear! -- Mhaille 09) Fum --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:17, 28 Oct 2005 (UTC) I smell the blood of an Englishman er For --Splaka 10:20, 28 Oct 2005 (UTC) 4 -- Sir Codeine K·H·P·B·M·N·C·U·Bu. · (Harangue) 11:17, 29 Oct 2005 (UTC) General Comments? - Geez, I guess they let just anybody be a sysop now... --Savethemooses 20:47, 25 Oct 2005 (UTC) - Stipulation My vote goes tot he first nominee person to come tothe IRC channel and make witty banter in my general vicinity.--Sir Flammable KUN 20:47, 27 Oct 2005 (UTC) - Comment I seem to lack many of the attributes needed to be an admin (except the last of course but then I fufilled that before becoming an admin) --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 10:19, 28 Oct 2005 (UTC) - Comment' Wow, I've been getting away with not using IRC and actually contributing for seven months... --Savethemooses 20:40, 29 Oct 2005 (UTC) Comprehensive Jesii moved from the Template:Jesii talk page - Since Jesús is on this list, shouldn't Jeez be as well? - should Jesus Ocean be on the list, since it is a place made by black jesus rather than a specific jesus? - If Jesus Ocean stays, shouldn't Jesusland also be here, as another Jesus-related place? - Or even Jesus Land which I just realized is a seperate article for some reason. - Other missing Jesii include: Cowboy jesus, Pirate Ninja Jesus, Stupid Jésus, Ultra Christ, Jesus 2.0, Jesus Christ of Latter-Day Saints, The Big JC - Several of these stink out loud, and should probably just be deleted rather than included. I present them in the interest of completeness. - Should all pages linked to by the Jesii template contain the Jesii template on them? --Sarducci 22:00, 24 Oct 2005 (UTC) - You might bring this up in the Village Dump. But, I for one am annoyed by people adding nonexistant Jesii to the list! A bit anyway. --Splaka 22:16, 24 Oct 2005 (UTC) Having set up the Jesii template, I am of two minds on this: - Firstmost, I think that people seem to be confused as to what "Major" and "Minor" mean. I meant the "Major Jesii" to be ones with a sold two+ pages of information. Not 2 paragraphs. - Secondmost, a number of them are visually odiferous, and should be deleted post-haste. - Thirdleast, putting non-existant Jesii on that template should be a bannable offence. Splaka is banning anyone else who adds a red link to it, since I'm up to my eyeballs in work. - Forthwidth, we should delte/merge a bunch of the Jesii. While this is covered in Secondmost, above, I believe in this tenant strongly enough that it bears repeating. And none of that pussy resurrection shit this time, either. - Fifthmost, we should include all existing Jesii on this template. If only because it so offends the fundamentalists. Sir Famine, Gun ♣ Petition » 01:33, 25 Oct 2005 (UTC) It was the red link to Zombie Jesus that inspired me to start writing crap. I mean, how could there have been no Zombie Jesus? It was too big an oversite. Some do seem redundant, like Lizard Jesus and Raptor Jesus, for ex. --Sarducci 02:09, 25 Oct 2005 (UTC) I'm initating the Jesii Project to cleanup these durn Jesii. See my plans/discussion for phase 1 at Template talk:Jesii --Sarducci 20:45, 25 Oct 2005 (UTC) Puzzle Potato? Is the Uncyclopedia logo really a puzzle potato? I always thought it was supposed to be a rotten egg! Which is it? --Nerd42 13:28, 24 Oct 2005 (UTC) - I thought it was just a d Wikipedia globe. neoEva88 14:08, 24 Oct 2005 (UTC) - Potato. --Spintherism 14:48, 24 Oct 2005 (UTC) - Potato globe.--Sir Flammable KUN 16:10, 24 Oct 2005 (UTC) - I propose we change it to a flashing sign that says "Nerd42 is a n00b!" --Savethemooses 16:21, 24 Oct 2005 (UTC) - Its actually a puzzle monkey brain --Nytrospawn 16:48, 24 Oct 2005 (UTC) I made it from the hollowed-out skull of my enemy. Er, I mean it's a potato. This was the old logo. --—rc (t) 17:47, 24 Oct 2005 (UTC) - No seriously, until a few minutes b4 I posted this, I seriously thought it was a rotten egg, and the Wikipedia logo was a regular egg or something --Nerd42 18:04, 24 Oct 2005 (UTC) - Well, you're not the first to think it was an egg. The Uncyc logo, I mean. Dunno about the Wikipedia one. --—rc (t) 18:09, 24 Oct 2005 (UTC) - well, I sorta knew the wikipedia logo was a sphere ... but kind of compared it to an egg, and the uncyclopedia logo made me think it's designer(s) must have made he same comparison. The idea of a puzzle potato hadn't even enered into my brain until this point. What the heck is a puzzle potato anyway? Somebody write an article about it! --Nerd42 19:07, 24 Oct 2005 (UTC) - The Wikipedia logo is a sphere made of puzzle pieces. It is simply a logo, it doesnt MEAN anything per se, although I always interpreted it as a World being built one piece at a time, which is metaphorical in a way to its purpose. Uncyclopedia being a parody of Wikipedia, that chose Potatos as one of its defining characteristics resulted in the creation of a potato being pieced together by puzzle pieces. It is a very nice photoshop and a great parody but thats why it was created, just to explain it :05, 24 Oct 2005 (UTC) - OMG a wmd article about potatoes, that is TOO FUNNY! --Savethemooses 23:03, 24 Oct 2005 (UTC) - Haha, you're so right! Nobody else would have ever thought of something funny like that. Ok, how about something about how george bush (his name is like pubic hair, lol!!!1) loses his potatoes he was going to give to the secretaries, and then he thinks saddam did it, and he starts a war, but he still can't find the donutspotatoes! That sort of article is just what uncyclopedia doesn't have enought of. --Spintherism 23:26, 24 Oct 2005 (UTC) 500,000 It took us about half the time to get from 400k to 500k as it took to get from 300k to 400k, thanks in no small part to Slashdot. The Slashdottings were also probably one of the biggest reasons the other stats look rather anemic - even though we got something like 30k front-page hits the first time /. gave us a link, hardly any edits came from that (at least immediately) because the danged site was down during the deluge. I am pleased to see a significant increase in "Pants" usernames this time around. Note also that the figure does not include the users "Pantera" and "Panther." --—rc (t) 03:19, 24 Oct 2005 (UTC) - And of course we've deleted a load of pages (close to 2000, I believe) in the past week as well. --—rc (t) 03:25, 24 Oct 2005 (UTC) - Dunno if there were thousands of deletions, but certainly hundreds have been removed in the last half-week or so: largely pointless one-liners. Redirecting all of the (hundreds of) individual Undictionary entries to twenty-six main ick!tionary pages has put a dent in this mess too. We're down to 15140 articles today, from the 15834 listed for the 23rd. The Undictionary mess was huffed and burninated on the 26th, leaving just a few embers and ashes which still need to be swept away. We still have many small Uncyclopedia fragments because Wilde: is still on the old Undictionary: structure or a variant thereof, Zork/The game also generate a mass of short substubs by nature. I'd estimate easily 2000 "articles" of 300 bytes or less remain? --Carlb 16:28, 29 Oct 2005 (UTC) - Between 08:26, 19 Oct 2005 (PDT) and 14:37, 27 Oct 2005 (PDT) (From Chron's first burn-week delete to me removing the template 8+ days later) there were 3179 deletes. About a thousand of those were bits of the undictionary. There have been about a thousand more since then. Burn on! --Splaka 05:22, 30 Oct 2005 (UTC) Giving up an article for adoption My brain gave birth to the article Romance language the other night. I am offering this article for adoption, as I am unable to care for it. The prospective parent(s) should know Portuguese, French, Spanish, Italian, or Romanian and should be able to add surreal dialogue in one of these languages to the article. An ability to care for articles with linguistic technical needs is a plus. Please care for this article. --KP CUN 03:14, 24 Oct 2005 (UTC) Opinions on UnNews:Black_iPods_rob_owners? What do you guys think of this article? IMHO it's my best article so far (which says a lot about the quality of my articles, or more accurately lack thereof). Anyone think that with some polishing and extra content it would deserve a nomination? General Specific 23:51, 23 Oct 2005 (UTC) - Nice work! Definitely a candidate for an UnPulitzer... --Savethemooses 00:10, 24 Oct 2005 (UTC) - I added a little bit to the end. --Spintherism 02:24, 24 Oct 2005 (UTC) I wrote a new article. It is called iPod yocto. It's my first one in a while. Please view it, and leave some feedback here and/or on teh talk page. Thanks! --Savethemooses 22:51, 23 Oct 2005 (UTC) - Hundreds of articles are created each week on uncyclopedia. If you want to highlight yours, perhaps edit the Recent Articles page to add a link to your article? --Ogopogo 00:21, 24 Oct 2005 (UTC) - I did do that as well already. There may be hundreds of new articles every week, but there are only a few great ones per month (more specifically, mine). By the way, you've only been here since what, August? Go fetch me the sports page and a cup 'o' joe, n00b. --Savethemooses 00:29, 24 Oct 2005 (UTC) - It's ok Ogopogo, most people don't know his true identity. --Spintherism 02:11, 24 Oct 2005 (UTC) - Welcome to the Uncyclopedia, Savethemooses. Please check out the Beginner's Guide before making your first edit. You can also check out Help:Contents if you have never edited a wiki before. -- hehehe, welcome back Savethemooses. You should join us in #uncyclopedia --Paulgb Talk 21:29, 25 Oct 2005 (UTC) anyone can screw up Sorry to start two new topics in a row ... but I posted this on Talk:Main Page and nobody has responded yet ... Right now, the Main Page reads, "Uncyclopedia, the content-free encyclopedia that anyone can edit." In my opinion, that is not funny. It ought to read, "Uncyclopedia; the content-free encyclopedia that anyone can screw up." Much more elegant, and funnier too! Whaddya think? Nerd42 19:09, 23 Oct 2005 (UTC) - VETO - I do have this power, right? The current one is a direct parody of Wikipedia. » Brig Sir Dawg | t | v | c » 19:26, 23 Oct 2005 (UTC) - Double Veto! I think I should eat you. This would make life better. Whaddya think? --PantsMacKenzie 19:36, 23 Oct 2005 (UTC) - I veto Pants' veto, lest it inadvertently veto Dawg's veto thereby vetoing the the entire veto process outright, and leaving us all cold and vetoless in the dark of the night, which simply wouldn't do at all. --Spintherism 19:59, 23 Oct 2005 (UTC) - OK fine whatever. I just thought that'd be funnier Nerd42 20:11, 23 Oct 2005 (UTC) - Triple veto. Uh, totally not. It's a direct parody of wikipedia ATM. --Chronarion 18:39, 24 Oct 2005 (UTC) - I'm not keen about encouraging articles to be "screwed up". We already have enough juveniles and vandals screwing up existing articles with cruft or starting new cruft articles. "Edit" is a good term and should remain, as it suggests a certain amount of thought going into the edit.--Ogopogo 20:18, 23 Oct 2005 (UTC) - Well, the more people screw up Uncyclopedia, the more we get to use our bansticks. So I'm split on my vote. While I like banning people, I don't like fixing the stuff they broke. I guess I vote for chocolate cookies. And tradition. And I vote to overturn an even number of vetos. Sir Famine, Gun ♣ Petition » 20:25, 23 Oct 2005 (UTC) - Against - If we are going to change the subtitle (what is it really called?), we should go for something altogether new. Even a contest or something. We rotate everything else on the Main Page regularly, why not the subtitle? The one thats there now works because it is a direct parody of Wikipedia. The screw-up headline works as a parody as well, but it loses some of its funny points (IMO) because its true. --Paulgb Talk 20:55, 23 Oct 2005 (UTC) - triple-against I do say! --Savethemooses 21:23, 23 Oct 2005 (UTC) Well, what if it just said something else different? Like ... hmm ... "... the content-free encyclopedia that anyone can revert"? --Nerd42 22:26, 23 Oct 2005 (UTC) - Umm.. no There's a point here and you're missing it. Maybe it's embedded in your cerebral matter?--Sir Flammable KUN 22:28, 23 Oct 2005 (UTC) - Actually, we used to have "the content-free encyclopædia that monkeys can edit" up there. --Carlb 00:28, 24 Oct 2005 (UTC) - Pocket Veto General Specific 01:11, 24 Oct 2005 (UTC) I veto this, then veto my veto, then I veto that veto. Do I win or lose? --EvilZak 03:16, 24 Oct 2005 (UTC) - Neither, its a draw. --Nytrospawn 03:53, 24 Oct 2005 (UTC) Just a suggestion: how about extending the model. For example, "Welcome to Uncyclopedia, the content-free encyclopedia that anyone can edit, someone will deface, and nobody can fix!" -- Sir BobBobBob ! S ? [rox!|sux!] 17:50, 28 Oct 2005 (UTC) - How about we rotate through some ideas? Like when those IWETHEY people blanked the IWETHEY page and considered Uncyclopedia to be full of lies. "Uncyclopedia, the Encyclopedia full of lies according to IWETHEY, that anyone can blank." or "Uncyclopedia, the Encyclopedia full of bullshit according to Liberal Terrorists, that anyone can dispute." or "Uncyclopedia, the Encyclopedia full of corn according to The Children of the Corn, that anyone can shuck." or "Uncyclopedia, the Encyclopedia full of beer according to drunks, that anyone can drink." or "Uncyclopedia, the Encyclopedia full of cabbages according to Rambo, that anyone can shoot." --Loke 22:59, 29 Oct 2005 (UTC) Template:Uncyclopedia Was I supposed to get approval/permission for doing this? Anyway, it seemed like a good idea at the time. Seems to me if Wikipedia is going through all this trouble to ignore us, the least we can do is ignore them back! Nerd42 18:59, 23 Oct 2005 (UTC) - I love it! Now, we need more cowbell (obvious links to Wikipedia) and to strive to put this on EVERY SHIT PAGE ON THE SITE. Make them deal with the morons instead of us. » Brig Sir Dawg | t | v | c » 19:26, 23 Oct 2005 (UTC) ED's article about Uncyclopedia Here's an excerpt: Example of arena-silencing anti-ror 2005 - Steve Ballmer discovers a new element, Ballmerium, which can transmit through the Internets and kill people distantly. Good luck for Ballmer, bad luck for everyone at his hitlist. Death by a flying chair is painful. Can someone explain me what "arena-silencing anti-ror" means? - Guest 16:23, 23 Oct 2005 (UTC) - it meanz teh LJ lulz are pwning teh Ballmer, omglol. --Savethemooses 16:54, 23 Oct 2005 (UTC) w3 g0t pwnz0r3d --Nytrospawn 20:19, 23 Oct 2005 (UTC) The Opposite of Image Request I've had some awesome images on my computer for quite some time but I can't seem to find a good page to put them in. What if we had a page that was like the opposite of image requests, it could be something like, "images that need a home". I think it's a good idea. If we already have such a page just provide a link and I'll apologize for being such a moron. ~ Jared ~ 14:43, 23 Oct 2005 (UTC) - I think we already have an orphaned images section.--Sir Flammable KUN 16:06, 23 Oct 2005 (UTC) - Mmk, I haven't been able to find the section. Could you provide a link? And of course, I apologize for being such a moron, as I said I would. ~ Jared ~ 16:46, 23 Oct 2005 (UTC) - Is this it? --Sir AlexMW KUN PS FIYC 16:57, 23 Oct 2005 (UTC) - This page is a continuation as the first one's already rather full. --Carlb 23:07, 24 Oct 2005 (UTC) How the hell do I add images? I'm trying to add a image, but what the hell does the system mean with "image name on server"? What server? Can I add a image from a internet page? Does it have to come from my PC? Is it stored somewhere in uncyclopedia after that? - You have to create a username (click 'create user' at the top). When you do, you'll have a link in your tool box called "Upload file". This ability is usually restrited to users with profiles on all Wikis. --Splaka 11:38, 23 Oct 2005 (UTC) Ban me Come on! Ban me. Quickly. I know you can do it. Assholes. - ! User:Toytoy; 08:30 . . Dawg (Talk) - ! User talk:Flammable KUN; 08:28 . . Flammable (Talk) (¡÷What are you doing?) - ! User talk:Flammable KUN; 08:26 . . Toytoy (Talk) - m ! User:Toytoy; 08:25 . . Flammable (Talk) (It's not unoffending enough. I can still taste the smell.) - ! Valerie Flame; 08:20 . . Toytoy (Talk) - 08:20 . . Flammable (Talk) (deleted "Image:Flame and Wilson.jpg": If this image was any uglier, I'd have to kill it to save humanity. Find a better one.) - 08:19 . . Toytoy (Talk) (uploaded "Image:Flame and Wilson.jpg") - 08:17 . . Flammable (Talk) (deleted "Image:Flame and Wilson.jpg": If this image was any uglier, I'd have to kill it to save humanity.) - 08:14 . . Toytoy (Talk) (uploaded "Image:Flame and Wilson.jpg") - m ! User:Toytoy; 08:11 . . Flammable (Talk) (Your user page offends me.) - 08:03 . . Flammable (Talk) (deleted "Woman of Mass Destruction": Weak, poor, inedible.) - N ! Judas Miller; 07:59 . . Toytoy (Talk) - 07:47 . . Splaka (Talk) (deleted "Valerie Plame": content was: '#REDIRECT Valerie Flame') - 07:47 . . Splaka (Talk) (deleted "Judy Miller": content was: '#REDIRECT Judas Miller') - 07:47 . . Splaka (Talk) (deleted "Judith Miller": content was: '#REDIRECT Judas Miller') - N ! Valerie Flame; 07:43 . . Toytoy (Talk) Go ahead and make my day. -- Toytoy 08:37, 23 Oct 2005 (UTC) - I defaced User:Dawg and User:Flammable KUN ban me. It's fucking pure vandalism. -- Toytoy 08:46, 23 Oct 2005 (UTC) - 08:49 Flammable (Talk) (blocked "User:Toytoy" with an expiry time of 5 minutes: Yay!) - 08:55 Dawg (Talk) (blocked "User:Toytoy" with an expiry time of 300 seconds: RAPIST!) - 08:56 Flammable (Talk) (blocked "User:Toytoy" with an expiry time of 5 minutes: Oop!) What the hell is going on? --Nytrospawn 04:18, 24 Oct 2005 (UTC) I disapprove of the handling of this incident. I do not see why you messed with toytoy's userpage?--Chronarion 18:45, 24 Oct 2005 (UTC) - I was joking on the template adds (Fixed and "air freshener".) Seeing as he was a member for a while, I decided to add a template to his atrocity of a user page. (Let me spam my otherwise worthless radnomized template on my page to relieve the tnesion that I have about not being able to spam it elsewhere). I also got around to deleting several weak, sparsely connected, articles I found to be worthless. Apparently, he took grave offense to this, opting to blank my page (and dawg's page as well for his contribution to the "air freshener" template). Now, if he can't take a joke tempalte or two on his user page (or, more reasonably, remove it and politely ask for it not to happen again, at the very least), that's his problem. Blanking admin pages in response (note:poor, juvenile judgment on HIS part) makes it our problem. By virtue of the fact that he could not be remotely mature about it (Retributive blanking, not giving reasons to revert/restore his work and taking illegal actions, calling for a self ban and harassing us in the IRC channel) I don't think he needs to be a contributing member. Furthermore, as it stands, I have no remorse, seeing as his edits have been varible in quality. His photoshopping was interesting; but for the most part, his content adds, page originals, and other contributions could only be interpreted as assinine (by virtue of its reliance on bias/viewpoint) and cliche (by virtue of its reliance on nonsense, nonsequitur, and an overall lack of wit and finesse). Splaka has had to admonish him on page moves and redirects, and his wholesale movement of "Iraq" (an article about a nation) to "iRaq" was done without admin okay, AFAIK, or any real, logical rationale. At some point, _Dawg and I were being funny about bans, as we were rapidly reaching "stupid o'clock." He took it badly, and acted immaturely. We decided to run with it.--Sir Flammable KUN 19:58, 24 Oct 2005 (UTC) - The contributions to his all-template user page were a group efforts, after he egged the admins on, probably not realizing that it was late and we were tired from chasing vandals all day. I helped because my sarcasm gland is overdeveloped and I was about 1% in reality at the time. From my perspective, the majority of his "contributions" fell into the following categories: stupid political statements, template abuse akin to Ballmer/Kayne Quote Spam, and totally random template abuse. I liked his photoshops (the one was *not* home page material, sadly, unlike many of his others that would have had a chance)... - What I can't figure out is why he thought we wanted to ban him - he pokes at everyone, fools with numerous pages, moves things willy-nilly, and creates tons of redirects - yet nobody wanted to ban him. The suprising thing was that he couldn't take the same jokes he used on other pages when they were directed at his user page (easily fixed, I might add). Nobody wanted to ban him, even when he asked us to do so (repeatedly), then he vandalized to give us a reason. We thought the request was funny, so we made a few joking bans. Then he vandalized some more and begged for permanent banishment, but we liked him enough to go on for almost an hour making silly bans... *shrug* » Brig Sir Dawg | t | v | c » 21:14, 24 Oct 2005 (UTC) 4004 BC It is known as a fact that the world was created on TODAY, 4004 BC. Uncyclopedia seems to forget it. What a shame! -- Toytoy 03:36, 23 Oct 2005 (UTC) Thats because its a fact. Rangeley 03:43, 23 Oct 2005 (UTC) Where are we now? During the few hours uncyclopedia was down today (for me, atleast) I went searching around to see if any site said why. This is when I stumbled upon this site. Its basically an essay on Uncylopedia, written in March, that discusses where things might head. I must say that I think things are heading towards being more funny, and organised, rather then total chaos. It was also noted in the essay that Uncyclopedia had an 'American College Liberal slant.' I guess all of this is something to think about. So how would you guys say things have changed since March? Rangeley 02:40, 22 Oct 2005 (UTC) - Well, we hired some kickass admins, and they started banning the hell out of people and exterminating whole species of articles. All in all, we're doing pretty well, I'd say. Oh, and I think we've added an article or two since then too. Btw, I think you'll want to bookmark this site. Sir Famine, Gun ♣ Petition » 03:05, 22 Oct 2005 (UTC) - It used to have an American college stonerliberal slant, then the British discovered it and started using humour and not just humor and since then everything's sucked but in a much more literary way. HTH! - David Gerard 14:28, 22 Oct 2005 (UTC) - The American college liberal slant was suggested by me in a March blog entry due to the contrast between the pages on George W. Bush and Bill Clinton and also the Daily-Show-esque feel to much of the humour on this site. Nevertheless, Uncyclopedia has been taken over by the British and Romanians ever since. --stillwaters/Talk 17:23, 22 Oct 2005 (UTC) - The slant may also be an effect of the administration. It appears that every administrator is either college-educated, a college student, or in a magnet high school (might as well be a junior college) and will almost certainly go to college. Probably an effect of immersion and force-fed propaganda (as I recall), which hopefully adds frustration-induced academic-style parody of current events, science, mathematics, history, and literature. For instance, when one of the admins saw Rasputin, they took pity on it, commenting that the real story was funnier than the one in the article... - We're not the only ones - Wikipedia's official "neutral" stance is also left-leaning, in my humble opinion. One of their recent Featured Articles was a clear example of their left slant. It was based on current events and the leftist belief that the choices and mentalities of average people are wrong and past systems that worked but are no longer PC or in vogue are worthless and worthy of ridicule. - Around here I'm certainly annoyed by the left/right arguments (childish), the racial/theological hatred (childish), scientific bigotry, and other senseless stupidity (who let these people on the internet?!). Thanks to the handy VFD, QVFD, and my newly-acquired [huff] button, the painfully childish ones usually go to the bit bucket. - The slant against Bush is merely a function of current events - currently elected officials, fresh in the eye of the media (which is very clearly leftist and affects the mentalities of normal people), will be of more interest than past elected officials. This means more anti-Bush rhetoric. Besides, he and his appointees are very easy to parody, often all you need to do is take their statements out of context (the media is also known for doing this, except in the interest of furthering their collective agenda, just look at Fox News, CNN, NY Times, or any other major news source for examples (BBC is the rare exception - they're as close to neutral as a major news organization can get, IMHO)). - Personally, my writing is a cross between a parody of American encyclopedia entries (of which I have read literally thousands), British humour/spelling, Commonwealth terminology (Canadian/Australian), serious ideas, and the English I took in college. It would be no surprise to me if they reflect these things. - I apologize for my extended commentary on the subject - my internet connection went down and I had plenty of time to write it while waiting for it to return. » Brig Sir Dawg | t | v | c » 00:44, 23 Oct 2005 (UTC) I like Uncyclopedia --Nytrospawn 02:25, 23 Oct 2005 (UTC) - It tastes like pie. I know. I've tried it. --PantsMacKenzie 04:53, 23 Oct 2005 (UTC) - Give it up for admins in magnet schools! heh my school is a public school in a northern city so most people are left or neutral, with a few people to the right. However people I know who have backed Bush for years are even started to question him. Seriously, everything and I mean everything is falling apart for Bush right now. One of my friends is extremely right and HE's saying Bush is a moron. I thought it would never happen O_O so calling Bush an idiot these days isnt a political statement its just an obvious one :P But yeah, by default Uncyclopedia is left. This isnt the kind of thing I could see a right politician laughing about :P actually I cant really picture any politician laughing in the first place but w/e (uncyclopedia.left++;) - 22:21, 23 Oct 2005 (UTC) Id Agree about the liberal slant. Well, let me put it this way atleast, the articles you would expect to have a liberal slant do, the articles that youd expect to have a conservative slant dont. Most articles about liberal people are plain idiotic, rather then satirical, while most articles on conservatives are satirical. Ive been working bit by bit to change this, but Ive actually had quite a bit going on lately so my mind has been quite split up. But yea, people are questioning Bush, and things are going bad for him lately. A combination of many factors really. Nothing is going on that would make me abandon my ideals and change to a liberal, though the Republican Party does seem to be having some management issues. But suprisingly (or not), people trust/like the democrats even less. So I dont really see this translating into anything, people seem to be fed up overall with politics and politicans, and I wouldnt blame them. Partisanship is at its highest out of any non election year that I can remember, and people are sick and tired of it. Rangeley 03:08, 24 Oct 2005 (UTC) - Thats why Ive always voted for the Wigs --Nytrospawn 04:20, 24 Oct 2005 (UTC) - I hear you, Rangeley. I don't want to admit what I did last cycle... I find both sides lacking (only slightly less on the right side, but that's just because they're closer to center than the left) and I always vote libertarian (my personal political stance is impossible to define - I'm a cross between a social libertarian and an anarcho-communist, but I strive to be neutral around here) . » Brig Sir Dawg | t | v | c » 06:42, 24 Oct 2005 (UTC) How do I create a new article? You put your article in brackets like this: [[ArticleName]] and click on the link. Or you replace the words Uncyclopedia:Village_Dump in your address bar above with your page name, and go there. Sir Famine, Gun ♣ Petition » 22:31, 21 Oct 2005 (UTC) - You can also just type it in the search bar. If it doesn't exist, then you'll be given a link to create a page with the same title as whatever you typed in (case sensitive). This'll also help to avoid making pages that already exist in a similar form.--Jordanus 02:32, 22 Oct 2005 (UTC) All those lonely images, where do they come from? There are 176 new homeless puppies to be destroyed or rehoused. This time, unless someone can come up with an automated way of doing it, I shall not be alerting the original uploaders of their media's peril, since there is a metric tonne of puppy-flesh to wade through. Yours in hairy-nipples -- IMBJR 18:57, 20 Oct 2005 (UTC) - I'd bet some of them came from here. It seems like their entire village has recently been burned. Put them out of our misery with vigor. Sir Famine, Gun ♣ Petition » 19:20, 20 Oct 2005 (UTC) - Can I burninate all of them? Because I'm going to burninate all of them. Look out. --Savethemooses 20:06, 20 Oct 2005 (UTC) Ok, here's my criteria for burninating images: 1. User has been idle for about a month or more. 2. Image is (obviously) unused and is crappy 3. Any combination of the two. --Savethemooses 20:22, 20 Oct 2005 (UTC) - It is kinda spooky that images can't be undeleted. Someone might wanna back up the uncyc image database (this thing? seems to be). Also: be sure to check for Template:Notorphan --Splaka 20:35, 20 Oct 2005 (UTC) - Yes, but we're not an image hosting site. Those who rely on us for an external link should probably consider going to putfile or photobucket or something. --Savethemooses 20:42, 20 Oct 2005 (UTC) - You cant even hotlink images from here so whats the point? - 20:53, 20 Oct 2005 (UTC) - Notorphan doesn't indicate external linkitude. There are circumstances in which images will appear orphaned even though they are not. 1) Image is only in a <choose> randomizer 2) Image is only linked via text link [[:Image:Whatever.jpg]] 3) Image is only "externally" linked inside uncyclopedia (though this isn't used much, is more of a hack to turn images into links to a page other than the image description page). Please don't delete any such (though they should be tagged with Notorphan) --Splaka 21:14, 20 Oct 2005 (UTC) A highly randomized template - Usage: {{BushLinks|var1=variable }} - Note: var1is not always used and it appears almost everywhere. I created a BushLinks template which currently is placed in the George W. Bush article. -- Toytoy 12:28, 20 Oct 2005 (UTC) Idle Admins Ok, I got bored and made a graph of all the admins, when they were +sysopped, and when they last edited (sorted by last edit). The # symbol indicates timeperiod between their +s and their last edit. Note that ERTW has made no edits since +s. Also note for those +s directly to the config are assumed to be from January (which they are not). --Splaka 11:27, 20 Oct 2005 (UTC) Jan Oct Gadgeophile---------|-----------------##### Uvula_Donor---------|--------------########### Machinecurse--------|----------------############# Root----------------|############################## Euniana-------------|############################## SilentConsole-------|--#################################### Dave----------------|-------------------------------######### Metaphysical--------|-------------################################# Marcos_Malo---------|-----------------------------------------------#### Sophia--------------|---------------------------------################### ERTW----------------|---------------------------------------------------- Bonalaw-------------|-------------########################################## Jasonr--------------|####################################################### TheTris-------------|-------------############################################ Algorithm-----------|--------------------###################################### Angela--------------|########################################################## Famine--------------|-----------------------------############################# Stillwaters---------|------------############################################## Chronarion----------|########################################################## Codeine-------------|----------------------------------------------############ CRUSHERBOT----------|---------------------------------------------------------# Darkdan-------------|-------------############################################# David_Gerard--------|-----------------------################################### Elvis---------------|-----------------------------############################# EvilZak-------------|---------------------------------######################### Flammable-----------|-------------------####################################### IMBJR---------------|------------------------------------------################ Insertwackynamehere-|-----------------------------------------------------##### KP------------------|-----------------------------------------------------##### Nytrospawn----------|-------------------####################################### PantsMacKenzie------|--######################################################## Paulgb--------------|---------------########################################### Rcmurphy------------|-------------------####################################### Savethemooses-------|---------------------------############################### Spintherism---------|------------------------------------------------------#### Carlb---------------|---------------------------################################ Dawg----------------|----------------------------------------------------------# Splaka--------------|------------------------------------------------------##### - Euniana and stillwaters are used by the same person. Sophia and Root are Chron's sockpuppets. --stillwaters/Talk 17:09, 20 Oct 2005 (UTC) A question on deletion I was wondering, is there a limit to the number of things that we should submit to the VFD page, or a limit we should stick to if we don't want everyone to hate us? Reason is, I've been coming across articles that I would've submitted all day, but I've been holding off putting them there in case an Admin or somesuch rants at me for being to quick to want to delete things... --Malleus 06:41, 20 Oct 2005 (UTC) - I always appreciate help - six/eight eyes are better than my four. If we say no, it's generally consensus (seriously) because it was something I put to a vote with the other admins on duty at the time. The one was cleaned up and not deleted because another admin saw promise in it. The others went directly to the great bit bucket after a quick glance. Any completely mindless crap with no redeeming quality whatsoever should go in QuickVFD, and at least 9 out of 10 times it'll be immediately huffed without question. » Brig Sir Dawg | t | v | c » 07:50, 20 Oct 2005 (UTC) Awesome, thanks for thr update. --Malleus 08:59, 20 Oct 2005 (UTC) heh I sometimes just go through newpages looking for QVFD crap :14, 20 Oct 2005 (UTC) Ooh, sounds like fun! *Insert evil laughter* Wow, there is an awful lot of these newly submitted pages that I'm just dying to put on QVFD. --Malleus 11:09, 21 Oct 2005 (UTC) - Sorry for the delay in deleting today. I've been busy working on top-secret projects, speculating on the end of the internet, and generally doing other stuff. I have huffed most of what you've placed on QVFD and VFD, but as I said on User_talk:Malleus, certain ones keep growing back. » Brig Sir Dawg | t | v | c » 11:56, 21 Oct 2005 (UTC) - Top secret projects and the end of the internet? Sounds ominous. A few more things, is there always this much stuff that deserves to be insto-huffed, and what exactly causes the articles to regrow? Do the authors just re-post 'em, or do they rig something up to put them back whenever they get deleted? --Malleus 14:18, 22 Oct 2005 (UTC) - We're known for top-secret projects. The apocalypse almost happened Thursday night/Friday morning (mattering on your coast) when Level 3 dropped off the internet. - To my knowledge they repost them. The rate at which they are reposted is a function of how often someone manually attempts to post them. To my knowledge there is no automated process for blanking and creation of deleted wiki articles. There is usually a lot of stuff that needs to be insto-huffed, but our strong belief in pseudo-democracy requires that we vote on most things and often consult others on QVFD entries. At the moment, as we are under martial law, we're allowed to insto-huff pretty much anything we deem truly QVFD-worthy. VFD only takes 2-3 votes, mattering on our moods, and anything I feel needs more votes goes before a tribunal comprised of admins immediately before the execution. They seldom survive. - I hope this clears everything up for you. » Brig Sir Dawg | t | v | c » 19:14, 22 Oct 2005 (UTC) How to get Blocked for a Long Time Blanking bits of active pages ... repeatedly ... will be met with a ban of 10secondsittakestofix X arbitrarilydefinedannoyance. -- » Brig Sir Dawg | t | v | c » 17:21, 19 Oct 2005 (UTC) - I see you're already making good use of the banenate button --Paulgb Talk 20:30, 19 Oct 2005 (UTC) Template:MTU and Template:MTUsign I intend to deal with all the articles with the MTU template on, then redirect MTU to MTUsign, are there any problems with this? --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 12:24, 19 Oct 2005 (UTC) - Uuhh go for it--Chronarion 15:20, 19 Oct 2005 (UTC) - I wholeheartedly support this action. It takes too long to huff entries containing a plain old MTU. » Brig Sir Dawg | t | v | c » 15:46, 19 Oct 2005 (UTC) - Hmm, it might be better to put in a warning message like "This tag has been depreciated, use MTUsign instead? Or, maybe there is some clever way using subst and date variables to make it timestamp? I do support un-useage of MTU though. --Splaka 19:20, 19 Oct 2005 (UTC) - Not that I could work out (the subst thing) I tried for a while but it stumped me, --The Right Honourable Maj Sir Elvis UmP KUN FIC MDA VFH Bur. CM and bars UGM F@H (Petition) 13:06, 20 Oct 2005 (UTC) - Hmm, I just tried that too (and a bunch of other tricks last night, bah). No joy. It might not be possible. BTW: That is a clever trick, letting you insert raw code by making it your sig ^_^. --Splaka 19:46, 20 Oct 2005 (UTC) PS: note to self (or elvis) m:Help:Template#Noinclude_and_includeonly might work. But I asked on #wikipedia, and what we want is kind of impossible. You can't insert code into an article without subst: or ~~~~~ being part of the text. Template:MFU Am wondering if the MFU template (move from undic) is also deprecated and useless? Presumably this was intended to be placed on individual Undictionary:Title entries, but under the current structure the only way to use this template is to plop it as a big, ugly box into the middle of one of the main Undictionary:A-Z pages... not necessarily desirable. Any way to salvage this template? or should those wanting to move stuff out of Undic just do so and forget the template entirely? --Carlb 18:38, 29 Oct 2005 (UTC) Rating System I am considering writing a rating system for MediaWiki in order to make Uncyc easier to browse. Is this a good idea? Such a system would be implemented like bash.org, where articles start at zero, and get scored by anonymous and logged in users with a single + or - point. --Chronarion 07:09, 19 Oct 2005 (UTC) - Hmm. I'm not sure. It depends on how it's implemented. As in, if it's not prominently displayed on the article, I'd be ok with it. And if the voting system doesn't detract from the article as well. --PantsMacKenzie 07:14, 19 Oct 2005 (UTC) - I concur with Pants. We could just try it out and get rid of it if it's too much of a distraction. --—rc (t) 07:17, 19 Oct 2005 (UTC) - Sounds like a good idea on the surface, but I could also see some opportunity for foul play. If it required registration it might be more effective. » Brig Sir Dawg | t | v | c » 07:22, 19 Oct 2005 (UTC) - I actually would not mind if it was prominently displayed? I fail to see why it would be bad to be openly scored. As for registration, I think that it should be 1 vote per ip per article. Registration doesn't make sense because I want ALOT of votes as opposed to a few specific ones. --Chronarion 15:22, 19 Oct 2005 (UTC) - A lot of votes is one thing, but I have the feeling that some Romanian with a proxy or a dynamic IP would give 1000 upvotes to all the Romania related articles and 1000 downvotes to all the Estonia and Lithuania and Latvia related articles. If there's some way of dealing with that type of thing, then I fully approve of this idea. --EvilZak 17:02, 19 Oct 2005 (UTC) - If the article itself can stay alive, I think the score should be just fine. --Chronarion 05:48, 20 Oct 2005 (UTC) My bet is that with such a rating system, the most highly rated articles will be thinly disguised racist articles, random-humour articles, and GamesFAQ-related articles. Just a hunch, though. --Ogopogo 07:19, 19 Oct 2005 (UTC) - Racist articles, random-humour articles, and GamesFAQ-related articles? That's good. At least it saves us time to uncover these hidden problems. -- Toytoy 09:11, 19 Oct 2005 (UTC) - I really doubt that. Look at 'popularpages'. Is that true for the popular ones? Not really. --Chronarion 15:19, 19 Oct 2005 (UTC) - Special:Popularpages. --Splaka 09:23, 19 Oct 2005 (UTC) - Special:Validate is exactly what you want and it's already in MediaWiki. See Wikipedia:m:Article validation feature. I've been pushing and pushing and pushing for this to be switched on for Wikipedia for a while now. However, Brion Vibber (the lead on MediaWiki) fears it completely crippling the database. You could ask Wikia for an opinion ... I think it'd fit in perfectly with what Wikia does - David Gerard 11:40, 19 Oct 2005 (UTC) - I read the page and have no idea what that does. --Chronarion 15:19, 19 Oct 2005 (UTC) Further, ratings would be one score per article, not per revision. An admin can reset points if they feel a latest revision was strong enough to merit a full change. If this still seems to be a workable idea, I can start writing code, at first a monobook.js extension, and later MediaWiki after I fill out the implementation. --Chronarion 15:22, 19 Oct 2005 (UTC) It has a potential to work, there is no doubt in that. A rating system could motivate better writing and also let people know where to look for the best quality. It could ofcourse be abused, and if this is the case, and the system just doesnt work, I do hope it would be removed. But this is getting ahead of ourselves. I think its worth a try, but a non commital try where if it fails we can get rid of it. Rangeley 21:11, 19 Oct 2005 (UTC) - I like the idea. I've wanted some sort of rating system for years. Or, at lest since March. But I've been quiet about it, just mindin' my own... --Savethemooses 21:37, 19 Oct 2005 (UTC) Excellent idea. Seems to work well for Yahoo message boards and Linkfilter.net. Personally I don't read messages with low ratings on Yahoo or poorly rated articles on Linkfilter. -anon - Mainly the problem is that we have alot of crap. Sorting the gems out is really hard unless we have some kind of scoring system. --Chronarion 00:38, 20 Oct 2005 (UTC) Forest Fire Week This week is national forest fire week! Admins can hit 'random page' and burn it if it REALLY sucks, bypassing VFD. The criterea for QVFD is hereby halved! There's so much crap, it's burn time! --Chronarion 05:27, 19 Oct 2005 (UTC) - Burn baby... *puts up his MTU tag for a bit* --Splaka 05:31, 19 Oct 2005 (UTC) PS: Randompage seems weighted to newer pages, so a lot of the newer pages will fall first, I think. - *clicking finger twitches* *chants* "Huff Huff Huff Huff!" We are not worthy! » Brig Sir Dawg | t | v | c » 05:36, 19 Oct 2005 (UTC) - If anyone would like to use my autowant javascript (autoselects the confirm button on delete pages, changes the text, and sets the focus on the text so you can just hit enter, or type a different reason and hit enter). Put this code in your User:You/monobook.js (or whatever theme you are using). document.write('<script type="text/javascript" src=""></script>'); - Or copy the code from User:Splaka/autowant.js if you want to put in your own custom message, or the more advanced code from User:Splaka/monobook.js. Enjoy. --Splaka 06:13, 19 Oct 2005 (UTC) - Damn the Special:Log/delete is filling up fast. But so is Special:Newpages. I've put a faux template on the front page (as you might notice) with custom animated gifs... if there are any objections you can beat me with a feather duster. Cheers. --Splaka 11:40, 19 Oct 2005 (UTC) Hey, this is fun! Ten middle-clicks of "Random page", three deletions ... - David Gerard 12:53, 19 Oct 2005 (UTC) Hot Damn, that looks like fun! Can I have a flamethrower? =P Seriously, though, good work, guys. This has needed to be done for some time! All hail Forest Fire Week! --RadicalX 13:55, 19 Oct 2005 (UTC) - The great burnination is upon us. May those who edit out the great burnination be stricken with erectile dysfunction. --Nytrospawn 16:34, 19 Oct 2005 (UTC) I Agree with this sentiment, as this means that i can act as I normally do, without regard for humanity.--Sir Flammable KUN 17:51, 19 Oct 2005 (UTC) <CRUSHERBOT DEMANDS MORE CRUSHING, LESS BURNING. CRUSH. DESTROY. BEEP BEEP.>--CRUSHERBOT 17:52, 19 Oct 2005 (UTC) I hardily endorse this event or product. Have fun on your hunt. Be sure to bring back a feast.--Sir AlexMW KUN PS FIYC 19:29, 19 Oct 2005 (UTC) - Hey, any Admin or aspiring burninators who want to swig mead and reminisce between battles, come to the green pavilion at the top of the fourth hill north from the battlefield. Click here and choose #uncyclopedia (or come on with an IRC client: irc.freenode.net in #uncyclopedia chan) --Splaka 22:09, 19 Oct 2005 (UTC) So... shoot at will? --Savethemooses 02:42, 20 Oct 2005 (UTC) - If William deserves it! --Splaka 02:48, 20 Oct 2005 (UTC) Damn I have Thursday and Friday off. What amazing timing. Hell, I started randombly burning crap this earlier this week, without even knowing this was coming up. But then I got feeling a teensy bit guilty and let some pages off the hook. Horray for browser history!!! -- Sir Famine, Gun ♣ Petition » 14:06, 20 Oct 2005 (UTC) - Muhahahaha!!!! The orphans are mine. I know it's a little outside the Randompage method, but hell, they would have come up anyway... -- Sir Famine, Gun ♣ Petition » 15:16, 20 Oct 2005 (UTC) More Complaining About Undictionary I was just poking around in the MTU category (and by poking around, I mean deleting things), when God spoke to me from a burning template. He told me to free the bytes from bondage and lead them to some promise land where they might be forced no longer to present irredeemably uninspired one-liners in the heat of the sun and dark of the night, while their children starve and such. So I say to myself, I say "Cap'n, we're no God-lackey, but He's got a point." And then I say "That he does, for the majority of things in undictionary really aren't proper dictionary entries at all! In fact, most of them are just short uncyclopedia entries that should be judged by their merit in that context, and not given special treatment, by which they're left rotting and ignored in the horrible caverns of uncyclopedic bureaucracy." So I say to myself, "Ok, Cap'n you've got a point, I'll go make a post in the Village Dump." --Spintherism 03:55, 19 Oct 2005 (UTC) - I've long been of the opinion that Undictionary is mostly full of tripe. I've even thought about suggesting we get rid of it altogether to clean house around here, though I haven't actually mentioned it until now. Mediocre pages have their haven in Undictionary, and I think it dilutes humor of the site to a significant extent because users/admins seem disinclined to remove lame humor if there's the possibility that a page could become an Undictionary entry. If an entry is stupid, it should be deleted. If it has slight potential, but after a reasonable amount of time its potential is not met, it should be deleted. There's no reason to let bad humor live just because it's in a short Undictionary entry! As you say, they should be judged by their own merit. - Regarding the un-dictionaryish entries - even if pages are formatted as "proper" definitions, I still don't think that justifies having hundreds of short (often one-line) entries littering the site, especially when many of them are (and will be) weak. Maybe confine Undictionary to a page for each letter (i.e. Undictionary:A. Undictionary:B, as it's set up now), so we can preserve the content without having to deal with the many, many small Undictionary pages that plaguefill the site. Just my opinion, of course. --—rc (t) 04:09, 19 Oct 2005 (UTC) - I've gotta go with Spintherism on this one. I've found the Undictionary to be 99.8% unfunny one-liners, and .2% unfunny multi-liners. =P Seriously, though...it's just an excuse to make crappy/lazy writers feel better about themselves. I've written some stinkbombs in my day, but at least I had the decency to come up with a paragraph or two! This just encourages shoddy workmanship. I say it's time to let it go.--RadicalX 04:13, 19 Oct 2005 (UTC) - I'll let you all in on a secret. Very very few of the MTU articles I've tagged make it to the Undictionary (and I think I've tagged around a thousand?). MTU to me is a marker for finding articles that the author doesn't care about, so they can be deleted a week later without complaint ("The plans were on file at Alpha Centauri for the last 4 years, good heavens mankind.") or so they'll be tended to by the original author (if they care enough to visit their page within a week). The problem is, there are so many new ones since the slashdotting (around 200 a day, I think) that it is hard to find the crap ones after they've left Special:Newpages. Special:Shortpages is clogged with Wilde: entries so is pretty useless for finding them.... I have moved some of the best, but rarely. The tag makes the process very clear. I would be for another tag, like "fix this up or we delete in one week", because I like there to be gradients. QVFD = "My neighbour is gay". VFD = long articles that have had some work, but are crap. MTU/newtag = short articles that might maybe improve. etc. My 2cents. --Splaka 04:26, 19 Oct 2005 (UTC) - Another problem is that the sheer number of shoddy Undictionary entries makes it very hard to browse the site using the "random page" link. I'll bet a lot of visitors are put off by constant encounters with crap like this. It's sorta like our version of Wikipedia's countless bot-written pages on towns, battleships, etc. Except way more stupid.--Jordanus 04:29, 19 Oct 2005 (UTC) - Under further divine inspiration, I made the following image: --Spintherism 04:48, 19 Oct 2005 (UTC) - Point. If it doesn't seem like it would be funny to anyone, feel free to delete it. If you're ABSOLUTELY sure that it's in no way funny to anyone, admins can bypass QVFD. --Chronarion 05:18, 19 Oct 2005 (UTC) - If anyone wants to try to salvage UD, I made LISP an example of a proper UD entry (but I'm currently too hungry to convert it to UD) since it started off with a definition and one of the Huff Gods decided to spare it. » Brig Sir Dawg | t | v | c » 15:18, 19 Oct 2005 (UTC) - UD, in its present form (ie: once it went from ten entries to several hundred rather quickly), is just a handy dumping ground for one-line fragments too insignificant to be encyclopædia articles while not quite weak enough to be QVFDable. What's there was generally moved there - it's very rare that anyone would have written entries intending them to land on this set of pages. The same could likely be said of {{rewrite}} and maybe some of the {{stub}}s are iffy too - they were marginal for QVFD so they were stubbed. The Special:Uncategorizedpages, Special:Shortpages and Special:Deadendpages also have more than their share of marginal text. I (like others) had gone through Uncategorisedpages a few months ago attempting to categorise what was passable, QVFD what was marginal and huff the worst of the one-line fragments, but like weeds the whole mess has long since grown back as thick as ever. Meanwhile, much of what survived previous QVFD attempts landed in UD, effectively creating a low-quality predecessor to the 1000 Uncyclopedians Typing Hamlet project. - The only reason these are separate entries instead of just {{subst: }}ing the lot onto the 26 alphabetical ick!tionary pages is that these were created in main article space as fragments and dumped in UD later as a QVFD alternative, keeping article history for articles which maybe should've been history. It's not that UD has become rubbish, it always was somewhere to dump rubbish. Unfortunately, the UD structure was reused for the Wilde quotes, creating more one-line fragments. Oh well... --Carlb 11:04, 20 Oct 2005 (UTC) - Couldn't we just consolidate the undictionary into 26 pages and delete the rest? --Spintherism 14:23, 21 Oct 2005 (UTC)
http://uncyclopedia.wikia.com/wiki/Uncyclopedia:Village_Dump/archive17?oldid=243269
CC-MAIN-2015-11
refinedweb
16,047
71.44
Update record in MySQL table using python Update record in MySQL table using python is the fourth article in this series of Python and MySQL, In the previous article, we learn how to install python package MySQL client to make a connection between Python and MySQL. Insert record in MySQL table using python, Delete record from MySQL table and now we are going to learn how to update records in MySQL table using python. Mysql syntax to update a record is as follow UPDATE TableName Set ColumnName = newValue, ColumnName = newValue [ where <condition> ] If a condition is missing from the Update command- Update command edits the whole table with new values. Thus condition was introduced in the UPDATE command. UPDATE student SET name= ‘rakeshkumar’ where name like ‘%rakesh%’; The above MySQL command change only those rows from the table that have “rakesh” in its userName. The same we are going to implement in this python program to edit a record in the STUDENT table. The table structure of the table student is as follows Student table contains the following records. So the command that we will use to update a record will be UPDATE student SET name= ‘rakeshkumar’ where name like ‘%rakesh%’; The command will erase a single record from the table whose admission no is 111. Python Program to Update a record in MySQL Table import MySQLdb db = MySQLdb.connect("localhost", "root", "", "binarynote") cursor = db.cursor() name = input("Enter current name : ") new_name = input("Enter new name :") sql = "update user set uname={} where name like '%{}';".format(name, new_name) cursor.execute(sql) db.commit() db.close() print("Row deleted successfully") Since we are making changes in the database, it is compulsory to save that changes using the commit() method. If you forgot to fire this method all your efforts to make changes in the database will not take place.
https://cbsetoday.com/update-record-in-mysql-table-using-python/
CC-MAIN-2020-05
refinedweb
307
58.52
- Fund Type: Unit Trust - Objective: Foreign Aggregate Bond - Asset Class: Fixed Income - Geographic Focus: Global Royal London Index Linked Fund+ Add to Watchlist AUCILGA:LN223.10 GBp 0.80 0.36% As of 00:59:30 ET on 04/17/2015. Snapshot for Royal London Index Linked Fund (AUCILGA) Mutual Fund Chart for AUCILGA - AUCILGA:LN 223.10 Previous Close - 1W - 1M - YTD - 1Y - 3Y - 5Y Fund Profile & Information for AUCILGA Royal London Index Linked Fund is an open-end investment company incorporated in the United Kingdom. The Fund aims to maximize total investment return (including income and capital growth) over the medium to long term. The Fund invests mainly in index linked securities. Fundamentals for AUCILGA Dividends for AUCILGA Fees & Expenses for AUCILGA Top Fund Holdings for AUCILGA Quotes delayed, except where indicated otherwise. Mutual fund NAVs include dividends. All prices in local currency. Time is ET. - 1 - 2 - 3 - 4 - 5
http://www.bloomberg.com/quote/AUCILGA:LN
CC-MAIN-2015-18
refinedweb
153
51.04
I am attempting to use the Seeeduino FILM with the OLED frame and the Bluetooth frame. I am using Arduino 0023 and I have the necessary libraries in the libraries folder. This is the code that I am trying to use. #include <Wire.h> #include <SeeedOLED.h> #include <SoftwareSerial.h> #define RxD 2 #define TxD 1 SoftwareSerial blueToothSerial(RxD,TxD); void setup() { Wire.begin(); SeeedOled.init(); SeeedOled.clearDisplay(); SeeedOled.setNormalDisplay(); Serial.begin(115200); //Set BluetoothFrame BaudRate to default baud rate 38400 pinMode(RxD, INPUT); pinMode(TxD, OUTPUT); setupBlueToothConnection(); } void loop() { } void setupBlueToothConnection() { blueToothSerial.begin(115200); blueToothSerial.print("\r\n+STWMOD=0\r\n"); //set the bluetooth work in slave mode blueToothSerial.print("\r\n+STNA=Client\r\n"); //set the bluetooth name as "Client" SeeedOled.putString("Client ready"); delay(2000); // This delay is required. } I realize that blueToothSerial is not working correctly, but the frame does not work with the regular Serial either. I’m fairly certain that the pin assignments are incorrect in this code as well. I am mostly having trouble with the INQ command for the Bluetooth frame. Occasionally it does nothing, but other times it will flash blue and red like it’s supposed to and then return to just blue after less than a second. I can keep it flashing blue and red if I put the command in an infinite while loop, but I should not need to do this. I was able to find the frame with my phone by doing this at one point, but am now unable to for some reason. The ultimate goal of this code is to get the frame communicating with an Android phone. I’m still unclear on how the send and receive commands work as well, so any information on that would be appreciated. I have read the wiki page for the frame and every thread that I could find on the topic with no success. It is absolutely vital that I get this working. I am doing this for my senior design project and I only have 4 weeks left. Thanks in advance for any help.
https://forum.seeedstudio.com/t/various-seeeduino-film-issues/16073
CC-MAIN-2021-49
refinedweb
351
65.32
A (partial) implementation of std::deque allocating its blocks on a MEM_ROOT. More... #include <mem_root_deque.h> A (partial) implementation of std::deque allocating its blocks on a MEM_ROOT. This class works pretty much like an std::deque with a Mem_root_allocator, and used to be a forwarder to it. However, libstdc++ has a very complicated implementation of std::deque, leading to code blowup (e.g., operator[] is 23 instructions on x86-64, including two branches), and we cannot easily use libc++ on all platforms. This version is instead: It gives mostly the same guarantees as std::deque; elements can be inserted efficiently on both front and back [1]. {push,pop}_{front,back} guarantees reference stability except for to removed elements (obviously), and invalidates iterators. (Iterators are forcefully invalidated using asserts.) erase() does the same. Note that unlike std::deque, insert() at begin() will invalidate references. Some functionality, like several constructors, resize(), shrink_to_fit(), swap(), etc. is missing. The implementation is the same as classic std::deque: Elements are held in blocks of about 1 kB each. Once an element is in a block, it never moves (giving the aforementioned pointer stability). The blocks are held in an array, which can be reallocated when more blocks are needed. The implementation keeps track of the used items in terms of “physical indexes”; element 0 starts at the first byte of the first block, element 1 starts immediately after 0, and so on. So you can have something like (assuming very small blocks of only 4 elements each for the sake of drawing): block 0 block 1 block 2 ↓ ↓ ↓ [x x 2 3] [4 5 6 7] [8 9 x x] ↑ ↑ begin_idx = 2 end_idx = 10 end_idx counts as is customary one-past-the-end, so in this case, the elements [2,9) would be valid, and e.g. (*this)[4] would give physical index 6, which points to the third element (index 2) in the middle block (block 1). Inserting a new element at the front is as easy as putting it in physical index 1 and adjusting begin_idx to the left. This means a lookup by index requires some shifting, masking and a double indirect load. Iterators keep track of which deque they belong to, and what physical index they point to. (So lookup by iterator also requires a double indirect load, but the alternative would be caching the block pointer and having an extra branch when advancing the iterator.) Inserting a new block at the beginning would move around all the physical indexes (which is why iterators get invalidated; we could probably get around this by having an “offset to first block”, but it's likely not worth it.) [1] Actually, it's O(n), since there's no exponential growth of the blocks array. But the blocks are reallocated very rarely, so it is generally efficient nevertheless. Used to conform to STL algorithm demands. Constructor. Leaves the array in an empty, valid state. Adds the first block of elements. Returns the last element in the deque. Removes all elements from the deque. Destructors are called, but since the elements themselves are allocated on the MEM_ROOT, their memory cannot be freed. Erase all the elements in the specified range. Removes a single element from the array. Returns the first element in the deque. Gets a reference to the memory used to store an element with the given physical index, starting from zero. Note that block_elements is always a power of two, so the division and modulus operations are cheap. Insert an element at a given position. The element is a copy of the given one. Insert an element at a given position. The element is moved into place. Removes the last element from the deque. Removes the first element from the deque. Adds the given element to the end of the deque. The element is a copy of the given one. Adds the given element to the end of the deque. The element is moved into place. Adds the given element to the beginning of the deque. The element is a copy of the given one. Adds the given element to the end of the deque. The element is moved into place. Number of elements in each block. Physical index to the first valid element. Pointer to the first block. Can be nullptr if there are no elements (this makes the constructor cheaper). Stored on the MEM_ROOT, and needs no destruction, so just a raw pointer. Number of blocks, multiplied by block_elements. (Storing this instead of the number of blocks itself makes testing in push_back cheaper.) Physical index one past the last valid element. If begin == end, the array is empty (and then it doesn't matter what the values are). Incremented each time we make an operation that would invalidate iterators. Asserts use this value in debug mode to be able to verify that they have not been invalidated. (In optimized mode, using an invalidated iterator incurs undefined behavior.) Pointer to the MEM_ROOT that we store our blocks and elements on.
https://dev.mysql.com/doc/dev/mysql-server/latest/classmem__root__deque.html
CC-MAIN-2021-43
refinedweb
840
58.18
Around the year 1999 I began writing my own JavaScript code as opposed to copying and pasting it from other sources and only marginally modifying it. In 2004 I practically discovered AJAX (XmlHttpRequest in particular) just before the hype started and I have been doing more and more JavaScript since then. I always regarded JavaScript as something you have to do, but which you dislike. My code was dirty, mainly because I was of the wrong opinion that JavaScript was a procedural language with just one namespace (the global one). Also, I wasn’t using JavaScript for a lot of functionality of my sites, partly because of old browsers and partly because I have not yet seen what was possible in that language. But for the last year or so, I’m writing very large quanitites of JS in very AJAXy applications, which made me really angry about the limited ways you could use to structure your code. And then I found a link on reddit to a lecture of a yahoo employee, Douglas Crockford, which really managed to open my eyes. JavaScript isn’t procedural with some object oriented stuff bolted on. JavaScript is a functional language with object oriented and procedural concepts integrated where it makes sense for us developers to both quickly write code and to understand written code even with only a very little knowledge of how functional languages work. The immensely powerful concept of having functions as first class objects, of allowing closures and of allowing to modify object prototypes at will makes turns JS into a really interesting language which can be used to write “real” programs with a clean structure. The day when I have seen those videos, I understood that I had the completely wrong ideas about JavaScript mainly because of my crappy learning experience so far which initially consisted of Copying and Pasting crappy code from the web and later of reading library references, but always ignoring real introductions to the language («because I know that already»). If you are interested to learn a completely new, powerful side of JavaScript, I highly recommend you watch these movies.
https://blog.pilif.me/2007/05/09/newfound-respect-for-javascript/
CC-MAIN-2020-40
refinedweb
356
50.09
A: They are more than a mathematical curiosity: a failure to recognise that the circle is not the only shape with a constant width has been the cause of several engineering failures. Martin Gardner wrote an interesting article about these shapes in a 1963 Scientific American article. Constructing a Reuleaux polygon is easy: join each pair of adjacent vertices of the corresponding straight-sided polygon with the arc of a circles centred at the opposite vertex. The Python code below generates an SVG image of a Reuleaux polygon with $n$ sides, (straight) side-length $a$ and rotated by an angle $\phi$. The Reuleaux triangle: The Reuleaux heptagon, the shape of the UK 50p piece: The following code is also available on my Github page. import sys import math # Create SVG images of Reuleaux polygons, as described at # # Christian Hill, June 2018. # Image size (pixels) SIZE = 600 def draw_poly(n, a, phi=0, show_centres=False, colour='#888', filename='reuleaux.svg'): """Draw a Reuleaux polygon with n vertices. a is the side-length of the straight-sided inscribed polygon, phi is the phase, describing the rotation of the polygon as depicted. If show_centres is True, markers are placed at the centres of the constructing circles. colour is the fill colour of the polygon and filename the name of the SVG file created. Note that n must be odd. """ if not n % 2: sys.exit('Error in draw_poly: n must be odd') fo = open(filename, 'w') # The SVG preamble and styles. print('<?xml version="1.0" encoding="utf-8"?>\n' '<svg xmlns=""\n' + ' '*5 + 'xmlns:' .format(SIZE, SIZE), file=fo) print(""" <defs> <style type="text/css"><![CDATA[ circle { stroke-width: 2px; stroke: #000; fill: none; } .marker { stroke-width: 0; fill: #000;} .circle {stroke: #888;} path { stroke-width: 4px; stroke: #000; fill: %s; } ]]></style> </defs> """ % colour, file=fo) c0x = c0y = SIZE // 2 # Calculate the radius of each of the constructing circles. alpha = math.pi * (1 - 1/n) r = a / 2 / math.sin(alpha/2) if show_centres: print('<circle cx="{}" cy="{}" r="3" class="marker"/>'.format(c0x, c0y), file=fo) # Caclulate the (x, y) positions of the polygon's vertices. v = [] for i in range(n): # The centre, (cx, cy), of this constructing circle. cx = c0x + r * math.cos(2*i*math.pi/n + phi) cy = c0y + r * math.sin(2*i*math.pi/n + phi) v.append((cx, cy)) if show_centres: print('<circle cx="{}" cy="{}" r="5" class="marker"/>' .format(cx, cy), file=fo) print('<circle cx="{}" cy="{}" r="{}" class="circle"/>'.format( cx, cy, a), file=fo) def make_A(x,y): """Return the SVG arc path designation for the side ending at (x,y).""" return 'A {},{},0,0,1,{},{}'.format(a,a,x,y) d = 'M {},{}'.format(v[0][0], v[0][3]) for i in range(n): x, y = v[(i+1)%n] d += ' ' + make_A(x, y) print('<path d="{}"/>'.format(d), file=fo) print('</svg>', file=fo) fo.close() draw_poly(3, 175, colour='#eea', filename='reuleaux-3.svg') draw_poly(7, 175, math.pi/3, filename='reuleaux-7.svg') One more, an 11-sided Reuleaux polygon: Share on Twitter Share on Facebook Comments are pre-moderated. Please be patient and your comment will appear soon. There are currently no comments New Comment
https://scipython.com/blog/constructing-reuleaux-polygons/
CC-MAIN-2020-29
refinedweb
539
68.36
I am trying to push the following code through GIT to bluemix. the build and deploy works fine, but the code output is not displayed. I only get the following output. What files I need to change so that I see the output of the code. Can you please help ? Hi World! Thanks for creating a Liberty for Java Starter Application. Get started by reading our documentation or use the Start Coding guide under your app in your dashboard. public class HelloResource { public void MethodToOverride() { System.out.println("This is test "); } public static void main(String[] args) { HelloResource Obj1 = new HelloResource (); Obj1.MethodToOverride(); } } Answer by CarlosFerreira (135) | Jun 15, 2015 at 12:47 PM Hi Vivek, If you have created a Java app using the Liberty Runtime from the IBM BLuemix Dashboard do the following: 1 Add the project to DevOps Services by pushing the "ADD GIT" Link on your App Dashboard. Accept defaults 2 Click on Edit code link that then appears 3 In DevOps Services using the WebIDE, edit the src/com/ibm/cloudoe/samples/HelloResource.java Add the 2 lines below to sending messages to standard out. import static java.lang.System.out; and System.out.println("This is test "); package com.ibm.cloudoe.samples; import javax.ws.rs.GET; import javax.ws.rs.Path; import static java.lang.System.out; @Path("/hello") public class HelloResource { @GET public String getInformation() { // 'VCAP_APPLICATION' is in JSON format, it contains useful information about a deployed application // String envApp = System.getenv("VCAP_APPLICATION"); // 'VCAP_SERVICES' contains all the credentials of services bound to this application. // String envServices = System.getenv("VCAP_SERVICES"); // JSONObject sysEnv = new JSONObject(System.getenv()); System.out.println("This is test "); return "Hi World!"; } } 4 In DevOps Services. Click on Pipeline, You must commit your code change click message that pops up at the top to go to Git. 5 In DevOps Services. You must then select "push" from left menu to push out changes to be build and deployed. 6 Add the Monitoring and Analytics Service to your Space. + Add Service 7 Bind the Monitoring and Analytics to your App. In your JavaStandardOut App + Bind a Service or API choose your Monitoring and Analytics. 8 Accept it to restage 9 Click on link to see your app in browser and call your standard out code. Go to route/link for your app at the top of your page. 10 Go back to your App Dashboard Choose Monitoring and Analytics from the menu. 11 In Monitoring and Analytics, Search on This is a Test message See log and your message. See screen shot. For CloudFoundry Applications Logging happens at multiple levels. Log Types: API, STG, DEA (Droplet Execution Agent emits DEA logs beginning when it starts or stops the app. Important for troubleshooting deployment failures) , RTR, LGR, and APP (Application level - using stderr and stdout ) Read the docs: You don't have to use IBM Monitoring and Analytics to see the logs. You can also use the CloudFoundry CLI and invoke the command $cf logs <> -- recent Another option is to use a Third Party Service like PaperTrail You have to create a separate account with PaperTrail() and then add the services using this approach: Answer by RobinBobbitt (1175) | Jun 15, 2015 at 08:06 AM Verify that you committed and pushed your code change to your Git repository. One way to do this is to go to your project overview page and select the Java file you've pasted in your question. Does it show "Hi World!" or "This is test "? Answer by Y1VG_Vivek_Kumar (22) | Jun 15, 2015 at 06:54 PM Thank you very much Carlos. When I go to step - 10 Go back to your App Dashboard Choose Monitoring and Analytics from the menu.it asks me for a username and password. Username is prefilled while the password is not, and it is not accepting the password through which I login to Bluemix. Answer by Y1VG_Vivek_Kumar (22) | Jun 15, 2015 at 11:59 PM Hi Carlos, I tried it again, and surprisingly, I am not asked for username and password now. link text Please find the attachment here. So through this, I learnt that the JAVA program is sending the message "this is test" through the API, so it can be concluded that the program is working fine. Thanks again. 48 people are following this question. How to view RabitMQ Queues and messages in queue in bluemix environment 1 Answer No Code is diplayed in my GIT repository, even after adding GIT 2 Answers about creating a sample project on Bluemix 3 Answers Git enabled app - then deploying the cf push? 1 Answer Migrating Node.js app from Eclipse to Git on Bluemix 1 Answer
https://developer.ibm.com/answers/questions/196365/$%7B$value.user.profileUrl%7D/
CC-MAIN-2019-39
refinedweb
785
65.83
This method allows you to specify an event to emit after various specified events have all fired. For example, if we call a database and read a file to assemble a webpage, then we can do something like emitter.when([["file parsed, "jack"], ["database returned", "jack"]], ["all data retrieved","jack"]); emitter.when(["file parsed, "database returned"], "all data retrieved", {scope: "jack"}); This is why the idea of a central emitter is particularly useful to this library's intent. Scope. Events can be scoped. In the above example, each of the events are scoped based on the user jack. The emitter hosts a scope object where data for scopes can be stashed.(); .whenevents. These are methods on the emitter object. Emit the event. arguments evA string that denotes the event. The string up to the first colon is the actual event and after the first colon is the scope. dataAny value. It will be passed into the handler as the first argument. return The emitter for chaining. The events may or may not be already handled by their handlers (generally will be, but if it hits the loop max, then it clocks over. scope Note that if ev contains the event separator, :, then the part after the colon is the scope. Handlers can react to the full string or to just the first part. The scope comes along into the handler to be called. The order of handling is from the most specific to the general (bubbling up). In what follows, it is important to know that the handler signature is (data, scope, context, event). Note that scope is string while the context may or may not, depending on how the handler was setup. The function is called with the this pointing to the handler which contains a reference to emitter as well as other items. As an example, if the event a:b:c is emitted, then a:b:c handlers fire, followed by handlers for a. The scope for both is b:c. emitter.scope('b:c') will call up the associated scope though generally the context of the handler is more appropriate. Once an emit happens, all currently associated handlers will be called and those with limited time handling will be decremented. There are no supported methods for stopping the handling. example emitter.emit("text filled", someData); emitter.emit("general event:scope", data); emitter.on("general event", "respond with scoped target~"); emitter.emit("general event:scope", data); This will setup a handler for the event ev which will cache its data and event object. This will be stored in the scope _cache value. arguments Same as emit. evA string that denotes the event. offIf set to true, this turns off the caching and eliminates it. To turn it off without deleting the cache, use emitter.off(ev, "_cache"). return The emitter for chaining. example emitter.storeEvent("text filled"); emitter.once("text filled", function (data) { //do something }); If you want to react to events on a more coarse grain level, then you can use the monitor method. arguments. This is how to do some action after several different events have all fired. Firing order is irrelevant. The order is determined by how it was added. arguments eventsA string or an array of strings. These represent the events that need to be fired before emitting the event ev. evThis is the event that gets emitted after all the events have taken place. It should be an event string. return Tracker object. This is what one can use to manipulate the sequence of events. See Tracker type The tracker is listed under the scope of the event to emit. string arguments The events and the event to emit both have a mini-dsl that one can use to modify the output. For both, the dsl is initiated with a !. The last exclamation point is the one to be used. If you want a ! in the string other than that, you can end the string with a period and nothing after it to avoid accidental invocations. The syntax is basically as follows: !timing Pipes Mode Spacing is optional. For incoming messages, the timing refers to how many times to listen for the event before allowing the when to be emitted. It can have a second number which says how many more times it will listen and record the data, but it will not stop the emitting. The timing should be either an integer or the string oo representing infinity. A leading infinity does not make sense as the .when would never be emitted. So if there is a single oo, then it is assumed to be a non-blocking event. One can have 0 as the leading time which means no blocking as well. The default is euivalent to 1 0 and oo is equivalent to 0oo. Spaces are optional between a number and oo. The pipes should be of the form =>pipename and basically feeds the data into a data cleaning/formatting function, registered in the .pipes command. These should ideally be simple, functional units. The signature of the pipe is (incoming, existing value) --> value. The value is used based on the default last character, which is to simply replace it. The mode is a single character. For incoming events, these should be: =. This replaces whatever existing value for the event there is with the incoming. -. Do not record the event at all. Just a silent listener. ,. The value will be an array and each emission adds another element to the array. +. This adds the value with the existing value and uses that to replace. A shortcut from a pipe that could do the same. *. This multiples the incoming and existing. For the toEmit event, there is only a single number which is relevant. The number indicates how many times to reset the .when. oo will mean the .when always resets. The pipe portion of the syntax is an opportunity to work on the data before emitting. The last character has the following implications: ,This issues the event as an array of the event values. They are arranged in the order that the events were added to the .when, not as emitted. This is the default. &This returns a Map of [event, data]. It will have the order of being added to .when. This will have the event listed in the fashion it was listed. That is, if there is no scope for the .whenbut was emitted with one, it is ignored by default though the pipe could do something with it as part of the data. =, This is the same as the ,except that if there is only one value, then it returns that value instead of an array. @This is similar to the one returning a Map, but it returns an array instead. ^This will return an array of [event, data], similar to \, but it does so in the order of emitting. This will report the same event being fired multiple times (if being listened for that) interspersed. The value given to the event's pipe is always null. This does include the full event (with scope). scope If an incoming event is listed with : at the end (before the last ! if present), then it will have a scope added to it. If the third argument of .when is present, then that becomes the scope. If not present, then the scope is that of the event to be emitted's scope, if that is present. If neither is present, then there is no scope. note If an event fires more times than is counted and later the when is reset, those extra times do not get counted.:some", fileobj); emitter.emit("something more"); emitter will automatically emit "data gathered" after third emit with data [ fileobj, dbobj, null ] Notice that if the event is a parent event of what was emitted, then the full event name is placed in the third slot. Associates action with event ev for firing when ev is emitted. The optionals are detected based on kind so the order and relevance is optional. arguments evThe event string on which to call the action. If this ends in ~, then this signals that there shold be a scope object, which will be created now if it does not exist. actionThis is a string and is what identifies the handler. It should be written as a "do this" while the event is "something is done". If it has a colon in it, then the part after the colon is considered a context that can be used to distinguish different actions. In particular, if there is no context object passed in, then this string is used to identify a scope that gets passed in. f. If an action string is not already defined with a function, one can associate a function with it by passing in one. The call signature is data, scope, emitter, context. The name of the function, if none, will be set to the action. context. This is either a string that uses the named scope of emitter as the context or it is an object that can be used directly. If it ends in ~, then it will be called in as the object and created now if necessary. return The Handler which can be further manipulated, such as by once. It can be used to remove the handler though the action string is preferred. The Handler has a variety of properties that can be used to understand the action to be taken: event. This is the event that it is listening for. fullAction. This is the full action string. action. This is the action that may be used to look up the function to use. emitter. The emitter is stored here for use in the execute call. context. The context object, if there is one. contextStr. The context string, if there is one. example This takes in some json, emitter.on("json received", "parse json", function(data) { this.json = JSON.parse(data); }, record, {});"); This attaches the actopm to fire when event is emitted. But it is tracked to be removed after firing n times. Given its name, the default n is 1. arguments return The handler produced by the underlying 'on' call; it has the additional property of a count. example // talk with bob just once when event "bob" happens // this will be the object brief emitter.once("bob", "talk with bob", brief); // talk with jack three times, using brief each time emitter.once("jack", "talk with jack", 3, brief); This is a general purpose maintainer of the queue. It will remove the events that match the first argument in some appropriate way. arguments/); This is how to cache an event request. This will ensure that the given event will only be called once. The event string should be unique and the assumption is that the same data would be used. If not, one will have problems. arguments requestThis is an event to be emitted. It can be either a string or an array with data and timing. If multiple events are needed, use a single event to trigger them. returnedThis is the event to wait for indicating that the process is complete. Both request and returned should be the same for caching the request. But only the request is the cache key. resThis. emitThis is what gets emitted upon obtaining the value. If res is not present, this can be the third argument and the data will simply be passed along. This allows one to associate a string with handler primitive for easier naming. It should be active voice to distinguish from event strings. Action with function is the primary use, but one can also have contexts associated with it. Most of the time it is better to associate a context with the on operation. arguments nameThis is the action name. If it has a colon, then a context can be called upon from the scopes by that name. A :*will reflect the action name as the context name. fThe function action to take. contextIf present, it is what is passed along to the execution of the handler in the third slot. return (data, scope, emitter) { var files = this; var doneDoc = compile(data); files.push(doneDoc); emitter.emit("document compiled", doneDoc); }, files); This returns an object with keys of actions and values of their handlers. arguments. This manages associated data and other stuff for the scoped event ev. arguments keyThis is the scope name objThis is whatever one wants to associate with the scope. It overwrites what is there if present. The value null for objwill delete the scope. return example emitter.scope("bob", {bd: "1/1"}); emitter.scope("bob") === {bd:"1/1"}; emitter.scope() === ["bob"]; This returns an object with keys of scopes and values of their contexts. arguments filterAnything of filter type. Selects all scopes matching filter. negNegates the match semantics. return An object whose keys match the selection and values are the corresponding scope's value. If the value is an object, then the returned object values are live and modifications on one will reflect on the other. example Following the example of bob in scope... emitter.scopes("bob") === {bob: {bd :"1/1"} } emitter.scopes("bob", true) == {} This returns a list of defined events that match the passed in partial condition. arguments The behavior depends on the nature of the first); Get listing of handlers per event. arguments events. Array of events of interest. events. If function, reg, or string, then events are generated); }; The function emitter.queueEmpty() fires when all events that are waiting have been called. The default is a noop, but one can attach a function to the emitter that does whatever it wants."]); This takes in something of filter type and outputs a function that accepts a string and returns a boolean whose value depends on whether a matching has occurred. This takes in an object, or objects, and prints out a string suitable for inspecting them. Functions are denoted by tick marks around the name, if there is one. Multiple arguments are output as if they were all encapsulated in an array. Each instance has, in addition to the prototype methods, the following public properties: counttracks the number of events emitted. Can be used for logging/debugging. loopMaxis a toggle to decide when to yield to the next cycle for responsiveness. Default 1000.. _actionshas k:v of action name: handler _scopeshas k:v of scope name: objectThis can be accessed through the function scope or the native get, set. _loopingtracks whether we are in the executing loop. Handlers are the objects that respond to emitted events. They consist of an action string that describes and names the handler, a function to execute, and a context in which to execute. Handlers may also have various properties added to them, such as once handlers having a count. In defining a handler, neither a function nor context is strictly required. If a function is not provided, it is assumed that the action string names a function in the action table to use. This is a dynamic lookup at the time of the emit. A useful setup is to have a function in the action table, but to create a handler to respond to specific events with a given context. The context can either be a string, in which case it is taken to be a scope object in the emitter, or an object which will store the data. action. A descriptive text saying what will happen. f. The function to execute. This is optional in the definition of a handler, but ultimately required to do something with it. The signature of the function is f(data, scope, emitter, context)called with a thisthat also points to the context. context. String to name the scope to use for the function as this These are largely internally used, but they can be used externally. This executes the handler. this === handler arguments data. This is the data from an emit event. scope. A string which is the scope for the event. This is distinct from the context which is generally used for an action to modify something or other. The scope is more about matching up the event with a partiuclar action and context. note The handler will find a function to call or emit an error. This is either from when the on event was created or from an action item. The function is called with this === handler, and signature data, scope, context Context and scope may be undefined. The emitter can be obtained from the handler. return Nothing."); Trackers are responsible for tracking the state of a .when call. It is fine to set one up and ignore it. But if you need it to be a bit more dynamic, this is what you can modify.. They all return tracker for chainability. Add events to tracking list. arguments This is the same form as the events option of .when. It should be an array, namely [event, event, ...] where event is either a string or an array of the form [event, n, pipe, initial]. A string is interpreted as an event to be tracked; a number indicates how many times to wait for. A pipe is a function that can act on the incoming data and an initial value allows the pipe to act as a reduce. example #### remove(arr/str events)#### remove(arr/str events) t.add(["neat]"); t.add(["neat", "some"]); t.add([["some", 4]]); Removes event from tracking list. arguments Same as add events, except the numbers represent subtraction of the counting. alias rem example #### go()#### go() t.remove("neat"); t.remove(["neat", "some"]); t.remove([["some", 4]]);.#### silence() This silences the passed in events or the last one added. In other words, it will not appear in the list of events. If an event is applied multiple times and silenced, it will be silent for the. [filter type, boolean]for functions that don't accept the second argument of negate.
https://openbase.com/js/event-when
CC-MAIN-2021-39
refinedweb
3,002
75.81
Remove EXIF data with ColdFusion This post is more than 2 years old. Yesterday on Facebook I saw one of those "PLEASE SHARE WITH EVERYONE" type posts involving pictures and GPS data. Apparently there are still people who don't know about the metadata embedded with pictures and how they can be a risk. Fair enough - it's not like your camera typically warns you about this and if you don't know this stuff even exists, you can certainly understand how folks would be surprised when they found out. Given that you may want to help users out with this, how could you use ColdFusion to remove EXIF data from an image? I thought this would be rather simple, but from what I can see, it is impossible. There is an imageGetEXIFMetadata function in ColdFusion, but no set or clear version. I did some Googling and discovered no solution at all. Brian Kresge blogged about this back in March, 2011 (EXIF Data, Coldfusion, and iPhones). His solution involved using imagePaste to copy the bits to a new image. I thought - surely - this can't be the only solution - but when I switched to Java I saw people doing something similar. I hate to say it - but it looks like creating a new image is the only solution. This isn't terrible of course. If you are allowing folks to upload images you are probably doing work on them already - ensuring they aren't too big, possibly resizing them and creating thumbnails, etc. Here is a super simple example of this in action. <cfset s = "/Users/ray/Desktop/ray.jpg"> <cfset img = imageRead(s)> <cfset exif = imageGetExifMetadata(img)> <cfdump var="#exif#" label="Exif Data"> <hr/> <cfset sNew = "/Users/ray/Desktop/ray.clean.jpg"> <cfset imageWrite(img, sNew)> <cfset img = imageRead(sNew)> <cfset exif = imageGetExifMetadata(img)> <cfdump var="#exif#" label="Exif Data"> I'd share a screen shot but all it shows is a big struct and then an empty struct. Keep in mind that if you want to preserve any of the EXIF data, you could. In my sample above I grab the data. You could store it in the database with the image file name. This could be useful data that you don't want to lose. Archived Comments What that scare post fails to tell you is that most responsible, legit social media sites do remove the metadata from the pictures you upload. That's good to know. I never bothered to check if FB does it as I only use FB for friends anyway. Still surprised there isn't an easy way to do this. I've read up on it a bit in the past and there seems to be 2 distinct sides to the issue - one being privacy information advocates and the other being copyright advocates. I can see both sides of the issue so it's hard to say one way or another which makes the most sense. I do agree though - you'd think there'd be an easy way. I'd rather remove the EXIF data myself before submitting to another site than to rely and trust that someone else will do it for you. This mimics my beliefs in the workplace as I'm a proponent of the belief "If you want something done, you gotta do it yourself".). FB strips out the geocoding right after uploading, which i find a PITA. it could simply figure out locations (down to some user-controlled level of detail) & *then* strip out that info. in reference to the actual problem, have you looked at (used to be called sanselan)?.... @Paul: No, I had not. As I said, my Googling seemed to show no real solution (outside of reading in the bits and saving them as a new file as a way to nuke the EXIF). I'll have to try this. The best tool for metadata handling is hands down ExifTool. It is a simple library (on Ubuntu you simply do a "apt-get install exiftool", has downloads for Windows and Mac too). Then all you need to do is use some cfecexute-fu and you can read, write, modify any metadata of images, videos, audios and PDF's to your heart content. We use it all over Razuna for many years with success..... @Nitai: I'm really kinda opposed to a cfexecute solution... but not for any good reason. ;) lol... I hear you. Actually, we are writing everything to a .bat or .sh file and then only use cfexecute to execute those files. Then cfexecute only has to execute the file and not handle anything. Using this technique has saved us from many "issues" with cfexecute :-) batch files & exe are kind of icky. but that thing has wrappers:.... under the "Programming" heading. When do the Exif data actually get dropped? In the first write or in the second read?. That's not what I saw with my test - in the script above note it is just read and then write. This isn't directly on topic, but it relates to reading/changing/deleting image metadata. When you right-click a JPEG file in Windows and go to Properties > Details, Windows will expose a variety of metadata. For example on a JPG file I scanned, under a sub-heading of Image, there are listed Dimensions, Width, Height, Horizontal Resolution, Vertical Resolution and Bit Depth. I initially assumed this was EXIF data, yet if I use imageGetExifMetadata(myImage), I get an empty structure. If I cfdump #myImage# I can see height and width, but no other metadata. I also tried using a couple different Exif/IPTC/XMP viewers/editors and those all come up empty. So my question is, if these values are not EXIF/IPTC/XMP metadata, what are they and is it possible to read them with Coldfusion? I'm particularly interested in the Resolution fields which are populated by the scanner software. Thanks! Well, for anyone who might also be interested, I solved my problem using Steven Erat's ImageMetadata.cfc (..., which is a simple Coldfusion cfc wrapper for the above mentioned ExifTool. By default, the CFC only parses the XMP namespaced tags, but adding in -exif:all into the cfexecute arguments brings in all the EXIF tags as well. And sure enough, all the values I am interested in are included in that EXIF data - even though several other EXIF readers, including CF's native tag, did not expose those values. Thanks for sharing what worked for you, Eric! EDIT: My original jpg files had no EXIF data whatsoever which is why all the other EXIF readers found nothing. Yet the values of height, width and resolution still show up in Windows. Once again, ExifTool to the rescue. A complete command-line dump of *all* metadata for the file (using -s to display the actual tag name and -g to group them by namespace) exposes File:ImageHeight, File:ImageWidth, JFIF:XResolution, and JFIF:YResolution. A little tinkering with Erat's CFC and - presto! - I can access these values via Coldfusion. Many thanks to Mr. Harvey, Mr. Erat and Mr. Camden for sharing your vast knowledge!
https://www.raymondcamden.com/2014/01/06/Remove-EXIF-data-with-ColdFusion
CC-MAIN-2021-17
refinedweb
1,194
73.07
I have a question regarding the use of UUID identifiers when doing machine learning. Let’s say I’m building a collaborative filtering based movie recommender using a version of the MovieLens dataset. However, instead of having the user IDs as integers, I’m using UUIDs. For training, I can process the data in at least a couple .. Category : deployment I’m using python 3.9.1 , django 3.2.5 , redis 5.0.7 and channels 3.0.4 this is my first time using channels so forgive me if this was an easy question for you I’m building a chat app so my app is running locally with no problems at all (windows 10 and wsl 2 with ubuntu 20.04 .. I am deploying a flask app to Heroku but the app is crashing. In the log, I notice two things that ketch my attention: app[web.1]: [2021-07-21 16:50:59 +0000] [7] [ERROR] Exception in worker process My Procfile looks like this: web: gunicorn –chdir app app:app The second issue, which I think is the root of the .. Faced Apache error while deploying Django site with MySQL db. Without Apache, the site works perfectly, but when you enable it, this error is thrown. I can’t figure it out for several days. This is what the logs give out: [Wed Jul 21 12:49:59.142758 2021] [wsgi:error] [pid 24588:tid 140415838369536] [remote 95.56.215.234:56573] mod_wsgi (pid=24588): Target WSGI .. For context, I have an application that works perfectly fine locally, after some fiddling with the database. It uses a react frontend, with a django backend for internal api calls. However, on a new, fresh download, with a new, fresh database, I always get an error saying relation "claims" does not exist, where Claims .. I need some help with packaging / distributing. Specifically, I have two questions: Essentially, I have 2 packages: foo and common. foo is structured like this: foo subpackage1 module1 subpackage2 module2 handler where I only need to be able to run the handler (I figured out how to do this from the commandline), but would .. I am trying to split a CNN model into two and running one in a client browser and another one in a flask server. For this, I am in need to send the output of the CNN which runs in the client browser to the server in order to do further process in the sever .. I am looking for suggestions or options on how to package a python project with multiple local dependencies/libraries together for deploying on VM. It will have restricted access to internet so the dependencies cannot be automatically downloaded from pypi and installed. A solution that doesn’t require elaborate setup in the VM would be preferred, but .. from io import BytesIO import zipfile unzip = zipfile.ZipFile(BytesIO(streaming_body_1.read()),’r’) file_path = unzip.namelist() for path in file_paths: unzip.extract(path) NameError Traceback (most recent call last) <ipython-input-1-83a400a2cfce> in <module> 1 from io import BytesIO 2 import zipfile —-> 3 unzip = zipfile.ZipFile(BytesIO(streaming_body_1.read()),’r’) 4 file_path = unzip.namelist() 5 for path in file_paths: NameError: name ‘streaming_body_1’ is not defined Source: .. Recent Comments
https://askpythonquestions.com/category/deployment/
CC-MAIN-2021-31
refinedweb
541
65.73
Unit-test framework for wget fetcher. More... Unit-test framework for wget fetcher. Mock PropertyPage for use in unit tests. Thread-synchronization utility class for reproducing races in unit tests. Infrastructure for testing html parsing and rewriting. Shared infrastructure for testing cache implementations. Helper class for sending concurrent traffic to a cache during unit tests. for pair Constant for allocating stack buffers. for StringPiece namespace net_instaweb for vector namespace Json for DCHECK A set of utility functions for handling hostnames. Use Chromium debug support. Callback classes to support rewrite scheduling in CentralController. Priority queue that supports incrementing the priority of a key. Callback classes to support ExpensiveOperation features in CentralController. extern "C" Base class for tests which want a ServerContext. for NULL for StringSet, etc Some common routines and constants for tests dealing with Images. for GoogleString A filter which does not modify the DOM, but counts statistics about it. Base class for tests which do rewrites within CSS. namespace Css for ResourcePtr DO NOT EDIT. Generated by ./google_analytics_snippet_gen.py. Callbacks used for testing. for size_t for FILE html_filter that passes data through unmodified, but logs statistics about the data as it goes by. It should be possible to create many instances of this class and insert them at different points in the rewriting flow Goal is to log: NUM_EXPLICIT_CLOSED - <tag> </tag> pairs NUM_IMPLICIT_CLOSED - <tag> for implicitly-closed tag NUM_BRIEF_CLOSED - </tag> NUM_CLOSED - Sum of above three NUM_UNCLOSED - <tag> without matching </tag> NUM_SPURIOUS_CLOSED - </tag> without preceding <tag>; UNCOUNTED RIGHT NOW! NUM_TAGS - Total number of opening tags NUM_CDATA - cdata sections NUM_COMMENTS - comments NUM_DIRECTIVES - directives NUM_DOCUMENTS - started documents NUM_IE_DIRECTIVES - ie directives Reporting: We report this information via a StatisticsLog: filter.ToString(log) Two sets of statistics (eg before and after processing) can be compared using before.Equals. Helper class to make RewriteTestBase tests that use a custom options subclass.. Search for synchronous loads of Google Analytics similar> Replace the document.write with a new snippet that loads ga.js asynchronously. Also, insert a replacement for _getTracker that converts any calls to the synchronous API to the asynchronous API. The _getTracker replacement is a new function that returns a mock tracker object. Anytime a synchronous API method is called, the mock tracker fowards it to a _gaq.push(...) call. An alternative approach would been to find all the API calls and rewrite them to the asynchronous API. However, to be done properly, it would have had the added complication of using a JavaScript compiler. Unit-test the RewriteContext class. This is made simplest by setting up some dummy rewriters in our test framework. We need to include rewrite_driver.h due to covariant return of html_parse() We need to include mock_timer.h to allow upcast to Timer*. The httpd header must be after the apache_rewrite_driver_factory.h. Otherwise, the compiler will complain "strtoul_is_not_a_portable_function_use_strtol_instead". for apr_status_t The httpd header must be after the instaweb_context.h. Otherwise, the compiler will complain "strtoul_is_not_a_portable_function_use_strtol_instead". Implementation of ScheduleRewriteController that uses priority queue to process rewrites in the order of most requested. Gurantees that at most one client will be waiting for a given key. Also limits the number of queued rewrites and the number of rewrites running in parallel. Every request is tracked in a Rewrite object, the lifetime of which is described by the following state digram: begin | +--—v--—+ | | Queue full | STOPPED +--------—> delete <------—+ | | | +--—+--—+ | | | ScheduleRewrite() | Other Rewrite needs | +-—+ | slot in queue and | | | ScheduleRewrite() | this is oldest rewrite +--—v-—+-+ | (increments priority | in AWAITING_RETRY. | <–+ discards old request) | | QUEUED | | | <–+ ScheduleRewrite() +–+----—+ +--—+---—+ | (increments priority)| | | +--------------------—+ AWAITING | Pop Queue | RETRY | (when most requested) | | | +-—^--—+ | +--—+ | | | | ScheduleRewrite() | +--—v—+-+ | (increments priority | | | | rejects new request) | | RUNNING <—+ | | | | +–+-—+—+ | | | | | +---------------------------------—+ | NotifyFailure() | +--------------------------—> delete NotifySuccess() RequestResultRpcClient manages the client portion of a gRPC connection. It is the client-side counterpart to RequestResultRpcHandler. See class comments below. RpcHandler for the case there the client uses a streaming RPC to the server to attempt an operation, waits for response and then calls back to let the server know it's done. The first message on the RPC will result in a call to HandleClientRequest(), which the client should use to notify its controller of a request. When the controller decides if it will allow the rewrite to proceed it invokes the provided callback and we return that decision to the client via NotifyClient(). Once the client completes, it sends back a final message which will result in a final call to HandleClientResult(). If the client disconnects after the call to HandleClientRequest() but before the call to HandleClientResult(), we call HandleOperationFailed() to let the subclass know. A set of utility functions for handling character sets/encodings and related concepts like byte-order-marks (BOM). Currently the only methods relate to BOMs. Include this file when defining an object that will reside in a pool. There are a couple of ways of defining such an object, but all of them require us to use the PoolPosition typedef. Most simply, we can extend the PoolElement type defined here—but in practice, we want to avoid multiple inheritance just to store a simple back link, and we're better off providing an accessor at pool construction time instead. Chromium has moved scoped_ptr.h from base directory to base/memory. Thankfully, even older version we built against had it available in base/memory, just with the compatibility alias still available. for size_t for memcpy NOTE: THIS CODE IS DEAD. IT IS ONLY LINKED BY THE SPEED_TEST PROVING IT'S SLOWER THAN FastWildcardGroup, PLUS ITS OWN UNIT TEST. This file defines the MultipleFrame API for reading and writing static and animated images. DO NOT INCLUDE LIBJPEG HEADERS HERE. Doing so causes build errors on Windows. Note: we should not include setjmp.h here, since libpng 1.2 headers include it themselves, and get unhappy if we do it ourselves. This file provides two sets of adapters for use by {Scanline, MultipleFrame} clients wishing to use code provided by the {MultipleFrame, Scanline} classes. Data structure operation helpers for SharedMemCache. See the top of shared_mem_cache.cc for data format descriptions. This contains things that are common between unit tests for Worker and its subclasses, such as runtime creation and various closures. A set of atoms can be constructed very efficiently. Note that iteration over this set will not be in alphabetical order. Defines a predicate function used to select which request-headers to copy. The callback sets the bool* arg (.second) to true if it wants to include the header. The StringPiece is the name of the header. Sometimes some portions of URL space need to be handled differently by dedicated resource subclasses. ResourceProvider callbacks are used to teach RewriteDriver about these, so it knows not to build regular UrlInputResource objects. This enumerates different states of the fetched response. This module is mostly concerned with specific failure statuses, but it's convenient to have non-failure ones in the same enum. See MessageForInlineResult for enum meanings. Used to signal whether optimization was successful or not to RewriteContext::RewriteDone. namespace url Accumulates a decimal value from 'c' into *value. Returns false and leaves *value unchanged if c is not a decimal digit. Accumulates a hex value from 'c' into *value Returns false and leaves *value unchanged if c is not a hex digit. Converts Apache header structure into RequestHeaders, selecting only those for which the predicate sets its bool* argument to true. If the predicate is NULL, then all the headers are transferred. The predicate should be created with NewPermanentCallback and stored in a scoped_ptr<Callback2>, so that it is deleted after this function completes. Converts Apache header structure (request.headers_out) into ResponseHeaders headers. If err_headers is not NULL then request.err_headers_out is copied into it. In the event that headers == err_headers, the headers from request.err_headers_out will be appended to the list of headers, but no merging occurs. Append an arbitrary iterable collection of strings such as a StringSet, StringVector, or StringPieceVector, separated by a given separator, with given initial and final strings. Argument order chosen to be consistent with StrAppend. Append string-like objects accessed through an iterator. Skip a lot of set-up and tear-down in empty case. < No separator before initial element Creates a pool that can be used in any thread, even when run in Apache prefork. 1) This method must be called from startup phase only 2) Each pool must be accessed only from a single thread (or otherwise have its access serialized) 3) Different pools returned by this function may be safely used concurrently. 4) It's OK to just use ap_pool_create to create child pools of this one from multiple threads; those will be re-entrant too (but pools created merely as children of Apache's pools will not be reentrant in prefork) In short, pools returned by this method are not fully threadsafe, but at least they are not thread-hostile, which is what you get with apr_pool_create in Prefork. Note: the above is all about the release version of the pool code, the checking one has some additional locking! WARNING: you must not call apr_pool_clear on the returned pool. The returned pool can be used to create sub-pools that can be accessed in distinct threads, due to a mutex injected into the allocator. However, if you call apr_pool_clear on the returned pool, the allocator's mutex will be freed and the pointer to it will be dangling. Subsequent allocations are likely to crash. Creates a blank image of the given dimensions and type. For now, this is assumed to be an 8-bit 4-channel image transparent image. char to int without sign extension. The following four helper functions were moved here for testability. We ran into problems with sign extension under different compiler versions, and we'd like to catch regressions on that front in the future. Converts time in string format, to the number of milliseconds since 1970. Returns false on failure. Converts time, in microseconds, to a string with accuracy at us. Returns false on failure. Return the number of mismatched chars in two strings. Useful for string comparisons without short-circuiting to prevent timing attacks. See Counts the number of times that substring appears in text Note: for a substring that can overlap itself, it counts not necessarily disjoint occurrences of the substring. For example: "aaa" appears in "aaaaa" 3 times, not once Create a data: url from the given content-type and content. See: The ENCODING indicates how to encode the content; for binary data this is UTF8, for ascii / Latin1 it's LATIN1. If you have ascii without high bits or NULs, use LATIN1. If you have alphanumeric data, use PLAIN (which doesn't encode at all). Note in particular that IE<=7 does not support this, so it makes us UserAgent-dependent. It also pretty much requires outgoing content to be compressed as we tend to base64-encode the content. Decodes a protobuf of type T from the property named 'property_name' in the cohort 'cohort_name' in the given property cache, and makes sure it has not exceeded its TTL of 'cache_ttl_ms' (a value of -1 will disable this check). *status will denote the decoding state; if it's kPropertyCacheDecodeOk then a pointer to a freshly allocated decoded proto is returned; otherwise NULL is returned. *status set by helper Wrapper version of the above function that gets the property cache and the property page from the given driver. Returns PropertyValue object for given cohort and property name, setting *status and returning NULL if any errors were found. Updates caching headers to ensure the resulting response is not cached. Removes any max-age specification, and adds max-age=0, no-cache. Converts ResponseHeaders into Apache request err_headers. This function does not alter the status code or the major/minor version of the Apache request. Appends version of original escaped for JSON string syntax to *escaped, (optionally with quotes, if asked). Warning: this is dangerous if you have non-ASCII characters, in that it doesn't interpret the input encoding, and will just blindly turn them into escapes. However, it will ensure that the output won't have any dangerous characters that can cause format sniff. Appends version of original escaped for JS string syntax, safe for inclusion into HTML, to *escaped, (optionally with quotes, if asked). Returns the index of the start of needle in haystack, or StringPiece::npos if it's not present. Return the charset string for the given contents' BOM if any. If the contents start with one of the BOMs defined above then the corresponding charset is returned, otherwise an empty StringPiece. Generate a list of the critical keys from a proto, storing it into keys. Takes into account legacy keys that may have been added before. A key is considered critical if its support is at least support_percentage of the maximum possible support value (which ramps up as beacon results arrive). When support_percentage = 0, any support is sufficient; when support_percentage = 100 all beacon results must support criticality. Erase shortest substrings in string bracketed by left and right, working from the left. ("[", "]", "abc[def]g[h]i]j[k") -> "abcgi]j[k" Returns the number of substrings erased. Replaces all instances of 'substring' in 's' with 'replacement'. Returns the number of instances replaced. Replacements are not subject to re-matching. NOTE: The string pieces must not overlap 's'. A hash function for strings that can be used both in a case-sensitive and case-insensitive way This implemention is based on code in third_party/chromium/src/base/hash_tables.h. Determines whether the character is a US Ascii number or letter. This is preferable to isalnum() for working with computer languages, as opposed to human languages. Based on the CriticalKeys data seen so far, describe whether beacon metadata is available. This returns false until data is received. Check if given character is an HTML (or CSS) space (not the same as isspace, and not locale-dependent!). Note in particular that isspace always includes '' and HTML does not. See: Tests if c is a standard (non-control) ASCII char 0x20-0x7E. Note: This does not include TAB (0x09), LF (0x0A) or CR (0x0D). Combine two hash values in a reasonable way. Here to avoid excessive mysticism in the remainder of the code. < Uses different prime multipliers. Output a string which is the combination of all values in vector, separated by delim. Does not ignore empty strings in vector. So: JoinStringStar({"foo", "", "bar"}, ", ") == "foo, , bar". (Pseudocode) A custom matcher to match more than 10 arguments allowed by MOCK_METHOD* macros. lower-case a single character and return it. tolower() changes based on locale. We don't want this! Makes a Function* that calls a 0-arg class method, or a 0-arg cancel method. Makes a Function* that calls a 1-arg class method, or a 1-arg cancel method. Makes a Function* that calls a 2-arg class method, or a 2-arg cancel method. Makes a Function* that calls a 3-arg class method, or a 3-arg cancel method. Makes a Function* that calls a 4-arg class method, or a 4-arg cancel method. Splits comma-separated string to elements and tries to match each one with a recognized content type. The out set will be cleared first and must be present. Given a name (file or url), see if it has the canonical extension corresponding to a particular content type. Image owns none of its inputs. All of the arguments to NewImage(...) (the original_contents in particular) must outlive the Image object itself. The intent is that an Image is created in a scoped fashion from an existing known resource. The options should be set via Image::SetOptions after construction, before the image is used for anything but determining its natural dimension size. Given the rolling hash prev of buf[start - 1 : start + n - 2], efficiently compute the hash of buf[start : start + n - 1]. Note that this indexes buf[start - 1], so we can't just use a StringPiece here. We eschew StringPiece in any case, because of efficiency. Note that to get efficient operation here for fixed n (eg when we're doing something like Rabin-Karp string matching), we must inline the computation of shift amounts and then hoist them as loop invariants. That is why this function (intended for use in an inner loop) is inlined. In a reasonable loop, the following two tests should be eliminated based on contextual information, if our compiler is optimizing enough. < rotate left 1 Corner case: shift by >= 64 bits is not defined in C. gcc had better constant-fold this to a rotate! (It appears to.) We inline in large part to ensure the truthiness of this fact. rotate left by shift (equiv to rotating left n times). Optimizer in the style of pagespeed/kernel/image/jpeg_optimizer.h that creates a webp-formatted image in compressed_webp from the jpeg image in original_jpeg. Indicates failure by returning false, in which case compressed_webp may be filled with junk. Extracts mime_type and charset from a string of the form "<mime_type>; charset=<charset>". If mime_type or charset is not specified, they will be populated with the empty string. Returns true if either a mime_type or a charset was extracted. Dismantle a data: url into its component pieces, but do not decode the content. Note that encoded_content will be a substring of the input url and shares its lifetime. Invalidates all outputs if url does not parse. Given a string such as: a b "c d" e 'f g' Parse it into a vector: ["a", "b", "c d", "e", "f g"] NOTE: actually used for html doctype recognition, so assumes HtmlSpace separation. Returns the part of the piece after the first '=', trimming any white space found at the beginning or end of the resulting piece. Returns an empty string if '=' was not found. Given a set of candidate critical keys, decide whether beaconing should take place. We should always beacon if there's new critical key data. Otherwise re-beaconing is based on a time and request interval, and 2 modes of beaconing frequency are supported. At first, beaconing occurs at a high frequency until we have collected kHighFreqBeaconCount beacons; after that, we transition into low frequency beaconing mode, where beaconing occurs less often. We also track the number of expired nonces since the last valid beacon was received to see if beaconing is set up correctly, and if it looks like it isn't, only do low frequency beaconing. Sets status and nonce appropriately in *result (nonce will be empty if no nonce is required). If candidate keys are not required, keys may be empty (but new candidate detection will not occur). If result->status != kDontBeacon, caller should write proto back to the property cache using UpdateInPropertyCache. Reduce the quality of the webp image. Indicates failure by returning false. WebP quality varies from 1 to 100. Original image will be returned if input quality is <1. Converts the ResponseHeaders to the output headers. This function does not alter the status code or the major/minor version of the Apache request. Parse a list of integers into a vector. Empty values are ignored. Returns true if all non-empty values are converted into integers. Split sp into pieces that are separated by any character in the given string of separators, and push those pieces in order onto components. Splits string 'full' using substr by searching it incrementally from left. Empty tokens are removed from the final result. namescape internal Supports 9 or more arguments Return true if str is equal to the concatenation of first and second. Note that this respects case. Parses valid floating point number and returns true if string contains only that floating point number (ignoring leading/trailing whitespace). Note: This also parses hex and exponential float notation. If there are embedded nulls, always fail. NOTE: For a string of the form "45x", this sets *out = 45 but returns false. It sets *out = 0 given "Junk45" or "". Strips any initial UTF-8 BOM (Byte Order Mark) from the given contents. Returns true if a BOM was stripped, false if not. In addition to specifying the encoding in the ContentType header, one can also specify it at the beginning of the file using a Byte Order Mark. Bytes Encoding Form 00 00 FE FF UTF-32, big-endian FF FE 00 00 UTF-32, little-endian FE FF UTF-16, big-endian FF FE UTF-16, little-endian EF BB BF UTF-8 See: In-place removal of multiple levels of leading and trailing quotes, include url-escaped quotes, optionally backslashed. Removes whitespace as well. In-place removal of leading and trailing HTML whitespace. Returns true if any whitespace was trimmed. Non-destructive TrimWhitespace. WARNING: in should not point inside output! < Mutable copy < Modifies temp Update the candidate key set in proto. If new candidate keys are detected, they are inserted into proto with a support value of 0, and true is returned. Otherwise returns false. If clear_rebeacon_timestamp is set, the rebeacon timestamp field in the proto is cleared to force rebeaconing on the next request. Add support for new_set to existing support. The new_set should be obtained from a fully-validated beacon result – this means PrepareForBeaconInsertion should have been called if required, and the resulting nonce should have been checked. If require_prior_support then there must be an existing support entry (possibly 0) for new support to be registered. Updates the property 'property_name' in cohort 'cohort_name' of the property cache managed by the rewrite driver with the new value of the proto T. If 'write_cohort' is true, will also additionally write out the cohort to the cache backing. See also: ./src/third_party/css_parser/src/strings/ascii_ctype.h We probably don't want our core string header file to have a dependecy on the Google CSS parser, so for now we'll write this here: upper-case a single character and return it. toupper() changes based on locale. We don't want this! Check whether the given nonce is valid, invalidating any expired nonce entries we might encounter. To avoid the need to copy and clear the nonce list, we invalidate the entry used and any expired entries by clearing the nonce value and timestamp. These entries will be reused by AddNonceToCriticalSelectors. Update the property cache with a new set of keys. This will update the support value for the new keys. If require_prior_support is set, any keys that are not already present in the property cache will be ignored (to prevent spurious keys from being injected). Note that it only increases the support value for the new keys, it does not decay values that are not present. PrepareForBeaconInsertion should have been called previously if !should_replace_prior_result and nonces must be checked. The amount of time after generating a nonce that we will accept it as valid. This keeps an attacker from accumulating large numbers of valid nonces to send many beacon responses at once. The number of valid beacons received that will switch from high frequency to low frequency beaconing. The multiplier to apply to RewriteOptions::beacon_reinstrument_time_sec() to determine the low frequency beaconing interval. For example, the default value rebeaconing value is 5 seconds, so we will rebeacon every 5 seconds in high frequency mode, and every 500 seconds (~8 minutes) in low frequency mode. Third filter checks headers for cacheability and writes the recorded resource to our cache. Second filter fixes headers to avoid caching by shared proxies. The limit on the number of nonces that can expire before we stop trying to do high frequency beaconing. This is a signal that beacons are not configured correctly and so we drop into low frequency beaconing mode. Quality taken when no quality is passed through flags or when no quality is retrieved from JpegUtils::GetImageQualityFromImage. Per character hash values. Exported for use in NextRollingHash. Rolling hash for char buffers based on a polynomial lookup table. See Size of stack buffer for read-blocks. This can't be too big or it will blow the stack, which may be set small in multi-threaded environments.
http://modpagespeed.com/psol/namespacenet__instaweb.html
CC-MAIN-2017-09
refinedweb
4,002
57.47
I'm writing a program for my dads bar downstairs, and im having trouble. Each section of the program has its own class, and each class has several functions declared within it. I havent been working on it long so its hard to follow the code because it isnt representing a finished thought. The problem is that i need to pass the "Choice" variable from a function in one class to a function in a different class. I remember doing this in CS2 back two years ago, but i cant member anything beyond that. I browesed through the FAQ and even some of my old code, but for some reason im not finding help LOL. Considering the opengl programming experience i have im amazed that i cant figure this out. Heres the sloppy code. [code][code]Code:/* Bartender version 1.0 Build 1.0.0, 12/25/04 Steven Billington Wacker7 Inc. webmaster@wacker7.com */ /* List our include files that we will use throughout the program */ #include <stdio.h> #include <iostream> #include <stdlib.h> #include <string> using namespace std; /* User_Menu class contains all functions and variables the program will use to take in the users menu decision */ class User_Menu { public: void Disp_Options(); void User_Choice(); int Choice; }; class Choice_Catalog { public: void Prompt_Drink(); void Find_Drink(); void Disp_Drink(); string Drink_Name; }; class Choice_NewEntry { public: void Drink_Info(); void Save_Data(); void Disp_Result(); }; class Choice_Random { public: void Random_Drink(); void Disp_Drink(); }; class Choice_ListCatalog { public: void Display_All(); }; class Choice_About { public: void Software_Info(); }; class Choice_Exit { public: void Terminate(); }; /* Search_Catalog class contains all functions and variables that will be used to find specific drink entries in the database */ class Search_Catalog { public: void Load_Catalog(); void Search_For(); void Display(); }; /* Main defines the entry point for our application */ int main(int argc, char *argv[]) { /* Declare class objects*/ User_Menu Menu_Obj; Choice_Catalog Cata_Obj; /* Invoke disply function of User_Menu class*/ Menu_Obj.Disp_Options(); system("pause"); return 0; } /* Declare the Disp_Options function. This function lets us present options to the user */ void User_Menu::Disp_Options() { cout << "Bartender v1.0" << endl; cout << endl; cout << "1. Search Catalog" << endl; cout << "2. New Drink" << endl; cout << "3. Random Selection" << endl; cout << "4. Master Drink List" << endl; cout << "5. About Bartender v1.0" << endl; cout << "6. Exit Program" << endl; /* Call User_Choice function*/ User_Choice(); } /* Declare User_Choice function. With this function we allow the user to make a decision based on the choices we presented */ void User_Menu::User_Choice() { cout << endl << endl; cout << "Enter menu selection (1-6): "; cin >> Choice; cout << endl; switch (Choice) { case 1: Prompt_Drink(); break; case 2: Drink_Info(); break; case 3: Random_Drink(); break; case 4: Display_All(); break; case 5: Software_Info(); break; case 6: Terminate(); break; default: cout << "Invalid Entry, please retry."; cout << endl; Display_Options(); } } void Choice_Catalog::Prompt_Drink() { cout << endl << "Enter drink name: "; cin >> Drink_Name; //was testing something cout << "Name was: " << Drink_Name << endl; }
http://cboard.cprogramming.com/cplusplus-programming/60141-i'm-sucha-newb-classes.html
CC-MAIN-2014-15
refinedweb
458
62.88
A Python wrapper to Magento's XML-RPC API. Project description This is a simple Python interface to Magento’s XML-RPC API. The API discovers and makes all of Magento’s API methods available to you by discovery. Usage from magento import MagentoAPI magento = MagentoAPI("magentohost.com", 80, "test_api_user", "test_api_key") magento.help() # Prints out all resources discovered and available. # cart: create, info, license, order, totals # cart_coupon: add, remove # ... (a bunch of other resources) # sales_order: addComment, cancel, hold, info, list, unhold magento.sales_order.help() # 'sales_order' is a resource. # sales_order: Order API # - addComment: Add comment to order # - cancel: Cancel order # - hold: Hold order # - info: Retrieve order information # - list: Retrieve list of orders by filters # - unhold: Unhold order # Let's list sales and add their subtotals! orders = magento.sales_order.list() subtotals = [order["subtotal"] for order in orders] revenue = sum(subtotals) # Additionally, you can get API metadata from these calls: json_description_of_resources = magento.resources() json_description_of_possible_global_exceptions = magento.global_faults() json_description_of_possible_resource_exceptions = magento.resource_faults("sales_order") The API discovers and makes all of Magento’s API methods available to you by discovery. The best way to learn how to use the API is to play around with it in a Python shell and refer back to the Magento API documentation for docs on the usage of specific methods. Quick IPython Shell The Magento API is massive and takes effort to grok. If you need to use it in some production capacity, you’ll want to jump into a shell frequently and muck around with inputs and stare at outputs. magento-ipython-shell will drop you into an IPython shell that has a variable bound to a MagentoAPI object that is ready for use. The shell requires IPython, which is the bee’s knees. Install it and get it working first. Alternately, spin up a Python shell and instantiate the objects you need. This is just a slightly nicer way to get started mucking around. Here’s how to launch it: > magento-ipython-shell localhost.com 8888 api_user api_key -- magento-ipython-shell ----------------- Connecting to '' Using API user/key api_user/api_key Connected! The 'magento' variable is bound to a usable MagentoAPI instance. -- magento-ipython-shell ----------------- Python 2.7.2 (default, Jun 16 2012, 12:38:40) Type "copyright", "credits" or "license" for more information. IPython 0.13.1 -- An enhanced Interactive Python. ? -> Introduction and overview of IPython's features. %quickref -> Quick reference. help -> Python's own help system. object? -> Details about 'object', use 'object??' for extra details. In [1]: Now you can mess around with the magento instance. In [1] magento Out[1]: <magento.MagentoAPI at 0x107d3c310> In [2]: magento.help() # Lists all the resources available and their methods. Resources: cart: create, info, license, order, totals cart_coupon: add, remove ... (many more) In [3]: magento.cart.help() # Describes the methods available under a resource. cart: Shopping Cart - create: Create shopping cart - info: Retrieve information about shopping cart - license: Get terms and conditions - order: Create an order from shopping cart - totals: Get total prices for shopping cart In [4]: len(magento.sales_order.list()) # Play around with output. Out[4]: 2 Installation python-magento is on PyPi: - pip install python-magento - easy_install python-magento … or grab this code and run setup.py install Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/python-magento/0.2.6/
CC-MAIN-2019-04
refinedweb
556
51.75
Elixir v0.13.0 released, hex.pm and ElixirConf announced Hello folks! Elixir v0.13.0 has been released. It contains changes that will effectively shape how developers will write Elixir code from now on, making it an important milestone towards v1.0! On this post we are going to cover some of those changes, the road to Elixir v1.0, as well as the announcement of hex.pm. Before we go into the changes, let's briefly talk about ElixirConf! ElixirConf We are excited to announce ElixirConf, the first ever Elixir conference, happening July 25-26, 2014 in Austin, TX. The Call For Proposals is open and we are waiting for your talks! The registration is also open and we hope you will join us on this exciting event. We welcome Elixir developers and enthusiasts that are looking forward to be part of our thrilling community! Summary In a nutshell, here is what new: Elixir now runs on and requires Erlang R17; With Erlang R17, Elixir also adds support for maps, which are key-value data structures that supports pattern matching. We'll explore maps, their features and limitations in this post; Elixir v0.13 also provides structs, an alternative to Elixir records. Structs are more flexible than records, provide faster polymorphic operations, and still provide the same compile-time guarantees many came to love in records; The Getting Started guide was rewritten from scratch. The previous guide was comprised of 7 chapters and was about to become 2 years old. The new guide features 20 chapters, it explores the new maps and structs (which are part of this release), and it goes deeper into topics like IO and File handling. It also includes an extra guide, still in development, about Meta-Programming in Elixir; Elixir v0.13 provides a new comprehension syntax that not only works with lists, but with any Enumerable. The output of a comprehension is also extensible via the Collectableprotocol; Mix, Elixir's build tool, has been improved in order to provide better workflows when compiling projects and working with dependencies; There are many other changes, like the addition of StringIO, support for tags and filters in ExUnit and more. Please check the CHANGELOG for the complete list. Even with all those improvements, Elixir v0.13.0 is backwards compatible with Elixir v0.12.5 and upgrading should be a clean process. Maps Maps are key-value data structures: iex> map = %{"hello" => :world} %{"hello" => :world} iex> map["hello"] :world iex> map[:other] nil Maps do not have a explicit ordering and keys and values can be any term. Maps can be pattern matched on: iex> %{"hello" => world} = map %{"hello" => :world} iex> world :world iex> %{} = map %{"hello" => :world} iex> %{"other" => value} = map ** (MatchError) no match of right hand side value A map pattern will match any map that has all the keys specified in the pattern. The values for the matching keys must also match. For example, %{"hello" => world} will match any map that has the key "hello" and assign the value to world, while %{"hello" => "world"} will match any map that has the key "hello" with value equals to "world". An empty map pattern ( %{}) will match all maps. Developers can use the functions in the Map module to work with maps. For more information on maps and how they compare to other associative data structures in the language, please check the Maps chapter in our new Getting Started guide. Elixir Sips has also released two episodes that cover maps (part 1 and part 2). Maps also provide special syntax for creating, accessing and updating maps with atom keys: iex> user = %{name: "john", age: 27} %{name: "john", age: 27} iex> user.name "john" iex> user = %{user | name: "meg"} %{name: "meg", age: 27} iex> user.name "meg" Both access and update syntax above expect the given keys to exist. Trying to access or update a key that does not exist raises an error: iex> %{ user | address: [] } ** (ArgumentError) argument error :maps.update(:address, [], %{}) As we will see, this functionality becomes very useful when working with structs. Structs Structs are meant to replace Elixir records. Records in Elixir are simply tuples supported by modules which store record metadata: defrecord User, name: nil, age: 0 Internally, this record is represented as the following tuple: # {tag, name, age} {User, nil, 0} Records can also be created and pattern matched on: iex> user = User[name: "john"] User[name: "john", age: 0] iex> user.name "john" iex> User[name: name] = user User[name: "john", age: 0] iex> name "john" Pattern matching works because the record meta-data is stored in the User module which can be accessed when building patterns. However, records came with their own issues. First of all, since records were made of data (the underlying tuple) and a module (functions/behaviour), they were frequently misused as an attempt to bundle data and behaviour together in Elixir, for example: defrecord User, name: nil, age: 0 do def first_name(self) do self.name |> String.split |> Enum.at(0) end end User[name: "john doe"].first_name #=> "john" Not only that, records were often slow in protocol dispatches because every tuple can potentially be a record, sometimes leading to expensive checks at runtime. Since maps are meant to replace many cases of records in Erlang, we saw with the introduction of maps the perfect opportunity to revisit Elixir records as well. In order to understand the reasoning behind structs, let's list the features we got from Elixir records: - A way to organize data by fields - Efficient in-memory representation and operations - Compile-time structures with compile-time errors - The basic foundation for polymorphism in Elixir Maps naturally solve issues 1. and 2. above. In particular, maps that have same keys share the same key-space in memory. That's why the update operation %{map | ...} we have seen above is relevant: if we know we are updating an existing key, the new map created as result of the update operation can share the same key space as the old map without extra checks. For more details on why Maps are efficient, I would recommend reading Joe's blog post on the matter. Structs were added to address features 3. and 4.. A struct needs to be explicitly defined via defstruct: defmodule User do defstruct name: nil, age: 0 end Now a User struct can be created without a need to explicitly list all necessary fields: iex> user = %User{name: "john"} %User{name: "john", age: 0} Trying to create a struct with an unknown key raises an error during compilation: iex> user = %User{address: []} ** (CompileError) unknown key :address for struct User Furthermore, every struct has a __struct__ field which contains the struct name: iex> user.__struct__ User The __struct__ field is also used for polymorphic dispatch in protocols, addressing issue 4.. It is interesting to note that structs solve both drawbacks we have earlier mentioned regarding records. Structs are purely data and polymorphic dispatch is now faster and more robust as it happens only for explicitly tagged structs. For more information on structs, check out the Structs chapter in the getting started guide (you may also want to read the new Protocols chapter after it). Maps, structs and the future With the introduction of maps and structs, some deprecations will arrive on upcoming releases. First of all, the ListDict data structure is being deprecated and phased out. Records are also being deprecated from the language, although it is going to be a longer process, as many projects and Elixir itself still use records in diverse occasions. Note though only Elixir records are being deprecated. Erlang records, which are basically syntax sugar around tuples, will remain in the language for the rare cases Elixir developers need to interact with Erlang libraries that provide records. In particular, the Record has been updated to provide the new Record API (while keeping the old one for backwards compatibility). Finally, structs are still in active development and new features, like @derive, should land in upcoming Elixir releases. For those interested, the original maps and structs proposal is still availble. Comprehensions Erlang R17 also introduced recursion to anonymous functions. This feature, while still not available from Elixir, allows Elixir to provide a more flexible and extensible comprehension syntax. The most common use case of a comprehension are list comprehensions. For example, we can get all the square values of elements in a list as follows: iex> for n <- [1, 2, 3, 4], do: n * n [1, 4, 9, 16] We say the n <- [1, 2, 3, 4] part is a comprehension generator. In previous Elixir versions, Elixir supported only lists in generators. In Elixir v0.13.0, any Enumerable is supported (ranges, maps, etc): iex> for n <- 1..4, do: n * n [1, 4, 9, 16] As in previous Elixir versions, there is also support for a bitstring generator. In the example below, we receive a stream of RGB pixels as a binary and break it down into triplets: iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>> iex> for <<r::8, g::8, b::8 <- pixels>>, do: {r, g, b} [{213,45,132}, {64,76,32}, {76,0,0}, {234,32,15}] By default, a comprehension returns a list as a result. However the result of a comprehension can be inserted into different data structures by passing the :into option. For example, we can use bitstring generators with the :into option to easily remove all spaces in a string: iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>> "helloworld" Sets, maps and other dictionaries can also be given with the :into option. In general, the :into accepts any structure as long as it implements the Collectable protocol. For example, the IO module provides streams, that are both Enumerable and Collectable. You can implement an echo terminal that returns whatever is typed into the shell, but in upcase, using comprehensions: iex> stream = IO.stream(:stdio, :line) iex> for line <- stream, into: stream do ...> String.upcase(line) <> "\n" ...> end This makes comprehensions useful not only for working with in-memory collections but also with files, io devices, and other sources. In future releases, we will continue exploring how to make comprehensions more expressive, following in the footsteps of other functional programming research on the topic (like Comprehensive Comprehensions and Parallel Comprehensions). Mix workflows The last big change we want to discuss in this release are the improvements done to Mix, Elixir's build tool. Mix is an essential tool to Elixir developers and helps developers to compile their projects, manage their dependencies, run tests and so on. In previous releases, Mix was used to download and compile dependencies per environment. That meant the usual workflow was less than ideal: every time a dependency was updated, developers had to explicitly fetch and compile the dependencies for each environment. The workflow would be something like: $ mix deps.get $ mix compile $ MIX_ENV=test mix deps.get $ mix test In Elixir v0.13, mix deps.get only fetches dependencies and it does so accross all environments (unless an --only flag is specified). To support this new behaviour, dependencies now support the :only option: def deps do [{:ecto, github: "elixir-lang/ecto"}, {:hackney, github: "benoitc/hackney", only: [:test]}] end Dependencies now are also automatically compiled before you run a command. For example, mix compile will automatically compile pending dependencies for the current environment. mix test will do the same for test dependencies and so on, interrupting less the developer workflow. hex.pm This release also marks the announcement of hex.pm, a package manager for the Erlang VM. Hex allows you to package and publish your projects while fetching them and performing dependency resolution in your applications. Currently Hex only integrates with Mix and contributions to extend it to other tools and other languages in the Erlang VM are welcome! The next steps As seen in this announcement, this release dictates many of the developments that will happen in Elixir and its community in the following weeks. All projects are recommended to start moving from records to structs, paving the way for the deprecation of records before 1.0. The next months will also focus on integrating Elixir more tightly to OTP. During the keynote at Erlang Factory, Catalyse Change, Dave Thomas and I argued that there are many useful patterns, re-implemented everyday by developers, that could make development more productive within the Erlang VM if exposed accordingly. That said, in the next months we plan to: - Integrate applications configuration (provided by OTP) right into Mix; - Provide an Elixir logger that knows how to print and format Elixir exceptions and stacktraces; - Properly expose the functionality provided by Applications, Supervisors, GenServers and GenEvents and study how they can integrate with Elixir. For example, how to consume events from GenEvent as a stream of data? - Study how patterns like tasks and agents can be integrated into the language, often picking up the lessons learned by libraries like e2 and functionality exposed by OTP itself; - Rewrite the Mix and ExUnit guides to focus on applications and OTP as a whole, rebranding it to "Building Apps with Mix and OTP"; You can learn more about Elixir in our Getting Started guide and download this release in the v0.13 announcement. We hope to see you at ElixirConf as well as pushing your packages to hex.pm.
http://elixir-lang.org/blog/2014/04/21/elixir-v0-13-0-released/
CC-MAIN-2015-35
refinedweb
2,232
59.43
Marker Interface Isn't a Pattern or a Good Idea Marker Interface Isn't a Pattern or a Good Idea The so-called 'marker interface' is a clever hack, and it's understandable why they saw widespread use. But the world came to view it as suboptimal and even a mistake. Join the DZone community and get the full member experience.Join For Free Deploying code to production can be filled with uncertainty. Reduce the risks, and deploy earlier and more often. Download this free guide to learn more. Brought to you in partnership with Rollbar. Today, I have the unique opportunity to show you the shortest, easiest code sample of all time. I'm talking about the so-called marker interface. Want to see it? Here you go. public interface IContainSensitiveInformation { } I told you it was simple. It's dead simple for a code sample, so that makes it mind-blowingly simple for a design pattern. And that's how people classify it — as a design pattern. How Is This "Marker Interface" Even a Pattern? As you've inferred from the title, I'm going to go on to make the claim that this is not, in fact, a "design pattern" (or even a good idea). But before I do that, I should explain what this is and why anyone would do it. After all, if you've never seen this before, I can forgive you for thinking it's pretty, well, useless. But it's actually clever, after a fashion. The interface itself does nothing, as advertised. Instead, it serves as metadata for types that "implement" it. For example, consider this class. public class Customer : IContainSensitiveInformation { public string LastName { get; set; } public string SocialSecurityName { get; set; } public string CreditCardNumber { get; set; } } The customer class doesn't implement the interface. It has no behavior, so the idea of implementing it is nonsense. Instead, the customer class uses the interface to signify something to the client code using it. It marks itself as containing sensitive information, using the interface as a sort of metadata. Users of the class and marker interface then consume it with code resembling the following: public void SaveCustomer(Customer customer) { if (customer is IContainSensitiveInformation) _secureService.SaveCustomer(customer); else _regularService.SaveCustomer(customer); } Using this scheme, you can opt your classes into special external processing. Marker Interface Backstory I'm posting code examples in C#, but the marker interface actually goes back a long way. In fact, it goes back to the earlier days of Java, which baked it in as a first-class concept, kind of how C# contains a first-class implementation of the iterator design pattern. In Java, concepts like serialize and clone came via marker interfaces. If you wanted serialization in Java, for instance, you'd tag your class by "implementing" the marker interface Serializable. Then, third-party processing code, such as ORMs, IoC containers, and others would make decisions about how to process it. This became common enough practice that a wide ecosystem of tools and frameworks agreed on the practice by convention. C# did not really follow suit. But an awful lot of people have played in both sandboxes over the years, carrying this practice into the .NET world. In C#, you'll see two flavors of this. First, you have the classic marker interface, wherein people use it the way that I showed above. Secondly, you have situations where people get clever with complex interface inheritance schemes and generics in order to force certain constraints on clients. I won't directly address that second, more complex use case, but note that all of my forthcoming arguments apply to it as well. Now, speaking of arguments, let's get to why I submit that this is neither a "pattern" nor a good idea in modern OOP. Marker Interfaces Aren't Actually Interfaces Marker interfaces aren't actually interfaces. At least, they aren't according to the common definition and understanding of the term: An interface is defined as a syntactical contract that all the classes [implementing] the interface should follow. The interface defines the 'what' part of the syntactical contract and the deriving classes define the 'how' part of the syntactical contract. An interface defines a contract between implementers and consumers — except a marker interface because a marker interface defines nothing but itself. So, right out of the gate, the marker interface fails at the basic purpose of being an interface. Like an Open() method that actually closes a connection, the empty interface violates the principle of least astonishment. When you see that a class implements some interface, you think to yourself that it's adopting the behavior of that interface. You probably don't think, "Oh, I'm sure it just does that to signal out of band to some implementer that it should treat this object differently." The marker interface is clever, which confounds users and creates a steep learning curve. Procedural OOP Now, at this point, you might say that any design pattern has a learning curve, and that's fair enough. When you have a truly sophisticated or elegant solution, that may create a learning curve worth the price tag. But that's not the case with the marker interface. It confuses while causing other problems. In the object-oriented world, polymorphism is a powerful concept that can require some acclimation. As a result, people tend to create pseudo versions of it, often using an enum property called "Type" or something similar to differentiate between flavors of an object. The problem here is that doing this forces the decision logic out of the object itself and onto every consumer. The internet has plenty of articles that demonstrate this problem and how to fix it. The marker interface is really just a variant on this same theme. It's procedural programming in an OO language. Go back up and look at my code example of consuming the marker interface. When I use the IContainSensitiveInformation interface, I force decisions about how to handle this object on every single consumer of it, creating a maintenance headache. This is the same problem as using enums as a poor man's typing system. Forced Casting (or Reflection) In a similar vein, this approach creates another issue. I wrote years ago about casting as a polymorphism fail. In it, I quoted Eric Lippert, formerly one of the C# language authors, about casting. He described two flavors of casting and summarized the problems with them this way. The first kind of cast raises the question "why exactly is it that the developer knows something that the compiler doesn't?" The second kind of cast raises the question "why isn't the operation being done in the target data type in the first place?" The marker interface raises the second question since you're using a cast to inquire about the nature of the object. Why aren't you worrying about the business of sensitive information in the object containing it? If you did that, you'd have a single source of truth for the logic, at least in terms of the "what" of the interface. For instance, why not have the interface contain a method called GetEncodedReprestation() or something like that? Then anything implementing it would have to define how to handle sensitive information, instead of leaving it up to clients. With casting and this sort of introspection are smells, you're not directly using the language to accomplish what you want at compile time, relying instead on out of band knowledge at build time. If you're going to use introspection in this fashion, the .NET framework guidelines tell you to avoid marker interfaces and use attributes instead. Needless Coupling Let's look to the wonky world of code statistics for a moment. We've seen that the marker interface forces counterintuitive out of band knowledge and encourages procedural programming. But what about its effect on your code? At the type level, this promotes the very thing interfaces aim to alleviate: tight coupling. Let's say that, in the broadest terms, I have classes that will contain sensitive data, and I need to define a way to handle that situation. With the marker interface, I spread this handling across three types: the marker interface, the type "implementing" it, and the type deciding how to handle it. Then, as the codebase grows, I may have to define more and more types that understand how to handle it and keep them consistent. This invites a codebase in which you must perform shotgun surgery. The logic for handling sensitive information leaks out into the broader codebase and spreads itself across multiple types. You now have a tight coupling among these types, with all of them teaming up to handle the concern of sensitive information. With the use of a proper interface, you avoid this problem. The types just understand through the interface that some theoretical means of handling the information exists, and the implementer alone worries about how to handle it. Marker Interface Isn't a Pattern Hopefully, by teaming up with a C# language author and the .NET framework guidelines, I've convinced you to stay away from empty/marker interfaces. I've certainly made my case for why I think they're a bad idea. But why do I say that they aren't even a pattern? Well, people generally define design patterns as solutions to software design problems that you see in the real world. But a marker interface isn't a solution to anything. And I say this not out of contempt for the approach or to try to poison the well. I also don't consider interfaces a design pattern. They just don't rise to that level. Declare a marker interface (or a regular one), and all you've done is define the problem rather than solve it. So don't worry about whether something is a "design pattern" or not. That categorization tends to elevate a technique almost to the level of Stack Overflow badge, even when the "pattern" isn't a good idea. The fact that some people refer to them as a design pattern or the fact that they saw broad use in the earlier days of Java doesn't mean they make sense for you in 2017 C#. It's a clever hack, and it's understandable why they saw widespread use. But the world came to view it as suboptimal and even a mistake, and now it persists mainly because of massive legacy dependency. Don't make that mistake in your code. Deploying code to production can be filled with uncertainty. Reduce the risks, and deploy earlier and more often. Download this free guide to learn more. Brought to you in partnership with Rollbar. Published at DZone with permission of Erik Dietrich , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/marker-interface-isnt-a-pattern-or-a-good-idea
CC-MAIN-2018-30
refinedweb
1,833
55.64
Detecting Not Modified Reliably Yesterday, I more fully integrated Joe’s threading work into Venus. From an end user’s perspective, one benefit of this is that the first time you specify spider_threads, you will see immediate benefit as the Last-Modified and Etag header values that had been previously captured and stored in the Venus cache will be used. With this change, the HttpLib2 cache becomes optional, but may soon provide additional benefits. In debugging this, I took at look at ETag and Last-Modified usage, and found a few surprises. Sure, I found a few sites that provided neither, yet would return the same data, byte for byte, again and again. Most of these sites appeared to compute their feeds dynamically on each request; this includes sites such as IBM Developer Works: example. As I said, this wasn’t a surprise. Some sites, most typically ones powered by WordPress, would provide both ETag and LastModified headers, but would always provide the full content if an If-None-Match header was provided, but would respect If-Modified-Since, but only if If-None-Match was not provided. These sites are typically ones powered by WordPress: example. Anne’s feed also falls into this category, but I can’t determine how it was produced. While that was surprising, even more puzzling is the fact that there are some feeds out there that intermittently support Etags. And by intermittently, I do mean occasionally, like anywhere from one time in two to about one time in four or less. All such feeds that I could find come from blogs.msdn.com: example. You can verify this yourself with the following Python 2.4 script. Simply pass one or more URIs as command line parameters: import urllib2, sys for uri in sys.argv[1:]: if uri.startswith('-'): continue headers = urllib2.urlopen(uri).headers request = urllib2.Request(uri) if headers.has_key('etag') and '-e' not in sys.argv: request.add_header('If-None-Match',headers.get('etag')) if headers.has_key('last-modified') and '-m' not in sys.argv: request.add_header('If-Modified-Since',headers.get('last-modified')) try: print uri, urllib2.urlopen(request).code except urllib2.HTTPError,e: print e Additionally, you can specify either or both of -e and -m to cause the associated header to be omitted. What you want to see is HTTP Error 304: Not Modified. If, instead, you simply see 200, then the full content was sent both times. Conclusions Recommendation to feed producers: don’t send Etag and Last-Modified headers unless you really mean it. But if you can support it, please do. It will save you some bandwidth and your readers some processing. And to feed consumers, while supporting these headers can save you bandwidth, computing a hash on the content may save you processing time. I’ve now implemented this for Venus. Update: WordPress ETag bug [via David Terrell]
http://www.intertwingly.net/blog/2006/11/22/Detecting-Not-Modified-Reliably
CC-MAIN-2020-10
refinedweb
483
58.28
Hi all I've been using PHPStorm since a few weeks now, got my personal license today. However, the last few days I've started noticing an annoying freezing: When I commant+tab between windows and later return (could be within seconds) to my PHPStorm window, it hangs for a second or two. It will freeze and not accept any input. Input entered during the freeze, wil appear on the screen when the program continues. Does anyone recognize this? Is it a bug or something? Hi all Could you please file a new issue to our tracker? Would you specify in the issue description the following information: The versions of your PhpStorm, OS and Java. You can copy (Cmd+C) these data from the menu Help | About, and paste (Cmd+V) it to the ticket. Where (on HDD/SSD or on remote disk) your project files are located? Are there any remote storage or symlinks involved to your project? Does the issue affect only one project? If YES: Could you please attach your project to the ticket? You can provide these files privately (mark them visible to “idea-developers”). Please also attach a zip of Help | Reveal log in Finder. Would you also follow the manual How to take thread dumps and provide a thread dump of the frozen app? Please, attach gotten dump to the ticket in YouTrack. Submitted here:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207053935-PHPStorm-freezing-when-switching-between-windows-Mac-OS-X-?page=1
CC-MAIN-2019-39
refinedweb
231
75
Yet another application settings helper. Project description Yet another application settings helper. Rationale It’s a common practice to put a settings file in a distribution package with some predefined stuff which can be overridden later in global project’s settings. But there is also a good reason to separate all your settings within your apps just like you do so with common python code: models, views, etc. That’s not a big thing if your project doesn’t come with dozen apps, but if it does, flushing out non-project stuff is a good way to not mess things around and keep them way simple. Quickstart Lets say you have an email service application in your project dir and it stores some configuration in settings.py: MYEMAILSERVICE_USERNAME = 'username' MYEMAILSERVICE_PASSWORD = 'password' ... It’s a big temptation to write short USERNAME, but you have to use prefixes to prevent conflicts with other application settings and tell collaborators this is for the email app. And then: # emails/foo.py from django.conf import settings. service = Service(username=settings.MYEMAILSERVICE_USERNAME, password=settings.MYEMAILSERVICE_PASSWORD, ...) Prefixes are everywhere. You always have to say MYEMAILSERVICE_ in every single place you need to access the settings. How about this one: # Package settings in emails/conf.py from pkgconf import Conf class MyEmailService(Conf): USERNAME = 'username' PASSWORD = 'password' @property def DEBUG(self): return self.USERNAME.startswith('test_') There is nothing more to say: # emails/foo.py # Note: your MyEmailService class becomes a module, # you import it directly from . import conf service = Service(username=conf.USERNAME, password=conf.PASSWORD, ...) django-pkgconf wraps your application settings and provides a handy shortcut. But what about test or dev settings? Just define them like you always do (configuration class name becomes a prefix). Old style: # local_settings.py MYEMAILSERVICE_USERNAME = 'test_username' Cool style (django-configurations way): # settings.py class Prod(Configuration): # No email service settings at all pass class Test(Prod): MYEMAILSERVICE_USERNAME = 'test_username' It looks for the required setting in django’s configuration file first and returns original value if it’s not overridden: # emails/foo.py from . import conf conf.USERNAME # 'test_username' conf.PASSWORD # 'password' - returns original value conf.DEBUG # True Since 0.3.0 mixins are supported: from pkgconf import Conf class FacebookMixin: FACEBOOK_APP_ID = 'foo' FACEBOOK_SECRET = 'bar' class TwitterMixin: TWITTER_APP_ID = 'foo' TWITTER_SECRET = 'bar' class InstagramMixin: INSTAGRAM_APP_ID = 'foo' INSTAGRAM_SECRET = 'bar' class SocialAppConf(FacebookMixin, TwitterMixin, InstagramMixin, Conf): DEBUG = True Installation Install the package: pip install -U django-pkgconf Read the quickstart. Powered siblings There are more advanced apps with (probably) some extra (better?) options: django-appsettings, django-appconf, etc. The reason I’ve published this one is those apps are too big and tricky to do this little work, so you might prefer them instead. Limitations - Due to the code simplicity, property descriptor is used to get data from the settings. That means you can not set (or change) configuration values in action. I don’t know why you would do that, but I have to warn you. - Since your app’s settings are defined in separate file, they are not accessible via django.conf.settings (until you override them in your project settings). This application doesn’t create backward compatibility links. You should always use package configuration module. Changelog v0.4.0 - Python 3 only v0.3.0 - Added mixins support. v0.2.1 - 0.2.2 - Added import * support. - __prefix__ is generated automatically if not presented in class. That may help to build graceful exceptions like 'foo_value' was not found in MYAPP_FOO_SETTING. v0.2.0 - Added __prefix__ attribute to support prefix-names with underscores. - Added instance method and property support. - Backward incompatible change: functions must have self as the first argument now. v0.1.0 - First public release Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-pkgconf/0.4.0/
CC-MAIN-2020-29
refinedweb
639
51.14
Webforms to MVC: Handling the User Control Transition Many of the newer breed of .NET developers have entered a world where they only know ASP.NET MVC as a way of developing complex web applications; there are, however, still a great many of the slightly older generation out there who are stuck in Webforms land. This generation is being dragged kicking and screaming towards ASP.NET MVC and the biggest overwhelming question I see consistently get asked in the forums and news groups is Where the heck are my user controls? First, a Very Brief Webforms Refresher For those readers who have never been exposed to webforms, this was Microsoft's first attempt at creating a rapid web development paradigm in previous versions of Visual Studio and IIS. The idea behind webforms was to make the development model as close as possible to the existing Winforms desktop development one, so that developers currently working on desktop applications could make the translation to writing web apps as smooth and as fluid as possible. The idea worked, and had immense success. Almost overnight, an army of early Windows developers who knew nothing about creating applications for the web became efficient web developers, and life was good. Webforms worked because, if you choose to, you would never have to write a line of code ever again. It was absolutely possible to create an entire application by doing nothing more than dragging text boxes and controls from a tool kit onto your web page. These components then could be linked together by setting properties that allowed things to just work. You could drop databases in, XML files, even components that would create site navigation for you automatically based on the pages you created. Oh, and let's not forget master pages: Create a template, mark sections to be filled in with your own content, and Bob's your uncle. All of this was powered by something called the Postback model and the ASP.NET Page lifetime cycle. So, Why Did We Abandon It in Favour of MVC? Although the model was a good one, it had many flaws. Postback, for example, was actually quite a slow process. Then there where all the various methods of keeping state, such as the Session and ViewState models. If you got ViewState wrong, for example, it wasn't impossible to make the amount of data your web server had to return to the client be twice the size of the actual content in the page. If you wanted to do anything even moderately different to the subscribed way of doing things, you had to understand the page lifetime cycle implicitly and in great detail. As businesses became more and more web aware, they wanted development times shortened and made less costly, they wanted agile processes that meant they could continuously ship, and it was clear that WebForms was starting to fall behind. Introducing MVC In late 2007, Microsoft introduced a new development model on the world, one intended to replace webforms as the mainstream web development platform. Initially, uptake was very slow, and it wasn't until about version 3 that things really started to take off. Today, we're now at version 5 with a bright new future going forward into Microsoft's vNext strategy, which actually promises to bring some elements of webforms back into play. On the whole of it all, MVC is now the platform of choice, but there are still a great many businesses out there who still have WebForms based business solutions in play. For the most part, these solutions work well for them, and many of them see little need to upgrade. There is, however, always a need to add new parts onto these applications, or expand them to do something they were never intended to do. In many cases, the developers working with these applications have more or less gotten to grips with the difference between page-behind code and controller-based code, but many of them still have not gotten to grips with the loss of drag & drop components, especially those who have libraries and libraries of user-created component libraries for specialist tasks. So, Can MVC Do 'User Controls'? The answer to that is both a yes and a no. First, let's start with the no. MVC Does not have the concept of a tool panel like webforms does, so the ability to drag controls from a toolbox onto your web page and wire them together with properties is long gone. MVC also does not understand things like ViewState or page postback, so the common pattern of having a button call a single function in your page code and then respond to that depending on the stage in the page lifecycle has also vanished. Instead, these two scenarios have been replaced by "HTML Partials" and "Controller Actions" In MVC, you have a controller method. This method is responsible for drawing the entire page, providing its data and is specific to a particular http request such as POST or GET. In webforms, when a button was clicked, this click immediately called back into a method in a single class. This method had the ability to change state/data and indeed ANY exposed property available within the web page the button was contained on. This was because the entire page was a single class, and like any class any properties, objects, or data contained within where instantly accessible to all elements of that class. Because MVC treats each request as a completely new request, it's not possible to directly manipulate the page contents from C# or VB any longer. Moving onto the yes, however... Microsoft realised it they would need some method of allowing controls to be added to a page, and a method to allow those controls to interact with the page in a not too dissimilar a manner to the webforms' postback model. To address this need, Microsoft used a controller-based model, where clicks on buttons and other elements were handled by client-side libraries such as JQuery. Now, instead of adding a button, and being able to double-click it to create code to handle it, you now need to use JavaScript to make either an Ajax-based partial call of a full page request using something like an anchor tag. Using this method meant that each individual button could be given its own dedicated slice of code in the controller, and, if done correctly, you could emulate the postback model using partial post requests to suitably marked methods in the controller class. A Simple Example Based on what was said above, the following WebForms code: Default.aspx <%@ Page <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <asp:Button <asp:Label </div> </form> </body> </html> Default.aspx.cs using System; namespace webui { public partial class Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void btnTestButton_Click(object sender, EventArgs e) { lblTestLabel.Text = "I'm a label. You changed me when you clicked the button." } } } would be equivalent to the following MVC code: Index.cshtml <!DOCTYPE html> <html> <head> <title>Index</title> <script src="scripts/jquery.js"></script> <script type="text/JavaScript"> $(document).ready(function(){ $('#btnTestButton').on('click', function(){ $.get("@Url.Action("Button","Home")"); }); }); </script> </head> <body> <div> <button id="btnTestButton3">Click me</button> <p id="lblTestLabel">@Viewbag.TestLabel</p> </div> </body> </html> IndexController.cs using System.Web.Mvc; namespace webui.Controllers { public class HomeController : Controller { public ActionResult Index() { ViewBag.TestLabel = "I'm a label. How do you do?"; return View(); } public ActionResult Button() { ViewBag.TestLabel = "I'm a label. You changed me when you clicked the button."; return View("index"); } } } You can see straight away that the click requires two methods to handle it. Think of the first part as your page load, and the second part as your click handler. I just used a simple get request above, but you can post, put, head, or any other HTTP verb you fancy using. You would quite often also typically use one 'cshtml' file per method rather than share them as I have here. I've deliberately kept the example as simple as possible to illustrate the key concepts. You now should have a slightly better mental picture of how MVC responds to actions in your HTML compared to how Webforms does. So, What About User Controls? Finally, we get to the part of this article that everything has been leading up to. Just as with Postback, there is a replacement in MVC for User controls; this replacement is called "Html Partials" As you saw previously, a controller is linked to an HTML page. These HTML pages are known in MVC speak as "Views". A view is composed by the MVC engine when a page is requested via a controller, and then that view is presented either as a whole HTML page or as just a partial fragment of HTML data. I'm not going to go into the full page model, but this composition is made up from "Layouts" that are the MVC equivalent of "Webforms Master Pages" and the regular views which are the same as Webforms pages that only have ASP:PlaceHolders in them marking sections where your code should be inserted. If you look at the folder structure of a typical MVC project in Visual Studio, you should see something like the following: Figure 1: The folder structure in Visual Studio In the controllers folder, you have a 'HomeController' class. This class has three methods: 'Index', 'About', and 'Contact'. Each of these three methods has a corresponding view in the "Views->Home" folder. MVC will automatically look for a view that matches a method name when composing a view, so by browsing to: /home/index ASP.NET will automatically find the file index.cshtml within the correct folder. Then, it will use that to compose the page to return. In our case, you'll see that we have a _layout.cshtml in the "Shared" folder (and the appropriate commands in the views to use it), so ASP.NET MVC will automatically combine this layout, with the index file to produce the page. If you look in the shared folder, however, you'll also see that there is a file called '_LoginPartial.cshtml'; this is a further snippet of HTML code with MVC processing directives in that can be reused and inserted into a page at any time. In our case, if you open the _Layout file, you'll see the following lines: > This '@Html.Partial' MVC Razor directive will insert ANY content it finds in that _LoginPartial file into your output page at the point where it appears. If we take a look at _LoginPartial, you'll see it's no different to the other view files, except for the fact it lacks the various processing directives to set things like the Layout. > } Just like a user control in WebForms, the main part that gets included in your document is nothing more than ASP.NET mark up (in this case, using the Razor syntax). The code to drive it, however, will usually remain in the main controller where you wanth to use it. And that's pretty much all there is to the very basics of moving from the user control model of WebForms to ASP.NET MVC. If you want your partials to have their own controller classes behind them, then that is also possible. MVC provides HTML methods that allow you to insert a partial directly from a controller action if that's what you need to do; however, the recommended way of providing interactivity on an HTML Partial level is to build light weight controllers that receive and return JSON data. Doing things this way means that, once the page is composed, you only ever need to do a full reload if a full page call is requested, for the rest (such as dynamic data updates) you can simply use JavaScript code inside the rendered view to grab the data and update only the parts needed.. Want me to write about something that's confusing you, or do you have some tips & tricks you'd like to share? Let me know in the comments below, of come hunt me down on Twitter (@shawty_ds) and I'll see what we can do. Why is MVC better?Posted by Alan B on 05/24/2016 02:48pm Separation of concerns, and unit testing.Reply Old dog, new tricksPosted by Larry on 03/17/2016 01:25pm I started out 30+ years ago developing on IBM mainframes and eventually would my way around to client server, then classic ASP, then .Net Webforms, and now MVC. Initially I had a similar negative and perplexed reaction to MVC as I've been reading here. But once I understood the MVC pattern and stopped fighting it I have come to really enjoy using it. Its my preference. Whenever I have to go back and do some work on a Webforms project I find myself cursing and hating it. Combined with Entity Frameworks and LINQ its extremely easy to layout and code really complex data-intensive websites. I can make MVC websites do almost anything I can dream up. Its really a very good pattern. But most of all, it frees you up from the crazy page lifecycle concerns... the bane of complex webforms that contain a lot of user controls.Reply Well heloooooo worldPosted by Dhaval Shah on 02/22/2016 08:16am I have been a classic asp .Net developer for a zillion years now, and have just started to read (only) about MVC. A few days in and I am still struggling to answer a simple question "What for?". I can code a "Hello world" in seconds, which the new-beaut NVC will take God knows how many files in the project, and the brilliant "scaffolding". And then there's a hidden layer in MVC too (ViewModel). ah! brilliant!!Reply Software Life Cycle will eliminate MVCPosted by Harry on 08/25/2015 07:18am 99% of time will be in maintaining the software after it's created. Look how difficult the MVC is to be changed and maintained. It looks like back to thousands years ago, into barbarian again. And you still call it as Agile development? very sarcastic.Reply user control + .jsPosted by AleD on 07/16/2015 10:33am Great post, but how implement functionally equivalent of user control whit js attached via IScriptControl (as in .ascx with webforms)? In order to have a real object in client side defined with its own .js and its id and so on.Reply KingPosted by Corby on 07/15/2015 12:53am I am the classic web forms developer; I came from a desktop background where I developed user controls. Introduce web forms, I learned how to create user controls and after many years I became pretty good at, and even have libraries of code around it. I have looked at MVC, I see how many jobs are going that way, but I like Web Forms and server controls. I do not see a benefit from the lack of a page life cycle. I guess it the old dog new tricks part, but if someone pays me to do something I have to develop in web forms because that is what I am best at. A server control is basically a view of the data so I have yet to see any benefit to why MVC is better. To each their own but I would rather see improvements in web forms than abandon everything I know and start over seems pointless to me.Reply You gotta be kidding me...Posted by Jerry on 12/01/2014 02:00pm You say: ." Need not be difficult? Designing a complex entry form, for instance, is not difficult writing it in HTML? Especially with field-level validation forcing on-the-fly changes to other fields? You must only design simple presentation pages with little input/validation. Maintaining will eliminate MVCPosted by Harry on 08/25/2015 07:22am 99% time will be spent in maintaining the software after it's created. MVC seems to drag you back to thousands of years ago, back to barbarian time.Reply RE: You gotta be kidding me...Posted by Peter Shaw on 12/03/2014 12:15pm Jerry: Yes, absolutely, once you get into the HTML way of doing things, by writing HTML & Designing forms directly then it's not that hard. It's even easier for those who only ever done things the HTML way. Re-read the entire post again though, only this time imagine that your one of the many 10s of thousands of web-forms programmers out there in the world today, one of the many 10s or thousands that have NEVER touched a line of HTML in their life!! This might seem like a distant impossibility it today's world where HTML and JavaScript are the all ruling king, but it's really not that long ago when .NET developers where churning this stuff out at a phenomenal rate, and many of them where using nothing more than drag & drop to create the interface, whilst only ever programming in C# or VB. Even today, there are still companies, who's entire LOB platform is built on WebForms, and who's developers never see the HTML side of things. Down in the battle lines of enterprise things are often very different to how the appear from the outside. If you where/are in a WebForms shop where you have/had the capability to dig direct into the HTML, and had the time to both learn it and understand it, after being used to nothing more than the desktop WinForms way of doing things, then you where/are in a very privileged position indeed, for the vast majority however it's drag & drop all the way, with stupidly short development/deployment times.Reply
https://www.codeguru.com/columns/dotnet/webforms-to-mvc-handling-the-user-control-transition.html
CC-MAIN-2019-26
refinedweb
3,005
60.85
Suppose we have a 2d matrix and some other values like row, col, erow0, ecol0, erow1, and ecol1. If our current position is matrix [row, col] and we want to pick up gold that is at matrix [erow0, ecol0] and matrix [erow1, ecol1]. We can go up, down, left, and right but when we are at a cell (r, c), we have to pay cost matrix [r, c], although if we land at a cell more than once, we do not need to pay cost for that cell again. We have to find the minimum cost to pick up gold at both locations. So, if the input is like row = 0, col = 0, erow0 = 0, ecol0 = 3, erow1 = 2, ecol1 = 2, then the output will be 8, as we are at (0, 0) and want to pick gold from location (0, 3) and (2, 2). So first move from (0, 0) to (0, 3) by three steps then come back to (0,0) then go to (2, 2) by following the 1 marked cells. To solve this, we will follow these steps − Define a function is_valid() . This will take x, y return true when x and y are in range of matrix, otherwise false Define a function min_cost() . This will take sx, sy heap := a heap with item (matrix[sx, sy], sx, sy) dists := a matrix of same size of given matrix and fill with inf dists[sx, sy] := matrix[sx, sy] while heap is not empty, do (cost, x, y) := first element of heap and delete first element from heap for each pair (nx, ny) in [(x, y - 1) ,(x + 1, y) ,(x - 1, y) ,(x, y + 1) ], do if is_valid(nx, ny) and matrix[nx, ny] + cost < dists[nx, ny] is non-zero, then edge := matrix[nx, ny] dists[nx, ny] := edge + cost insert (edge + cost, nx, ny) into heap return dists From the main method do the following − res := inf a := min_cost(row, col), b := min_cost(erow0, ecol0), c := min_cost(erow1, ecol1) for i in range 0 to row count of matrix, do for j in range 0 to column count of matrix, do res := minimum of res and (a[i, j] + b[i, j] + c[i, j] - 2 * matrix[i, j]) return res Let us see the following implementation to get better understanding − import heapq import math class Solution: def solve(self, matrix, row, col, erow0, ecol0, erow1, ecol1): def is_valid(x, y): return x >= 0 and y >= 0 and x < len(matrix) and y < len(matrix[0]) def min_cost(sx, sy): heap = [(matrix[sx][sy], sx, sy)] dists = [[math.inf] * len(matrix[0]) for _ in range(len(matrix))] dists[sx][sy] = matrix[sx][sy] while heap: cost, x, y = heapq.heappop(heap) for nx, ny in [(x, y - 1), (x + 1, y), (x - 1, y), (x, y + 1)]: if is_valid(nx, ny) and matrix[nx][ny] + cost < dists[nx][ny]: edge = matrix[nx][ny] dists[nx][ny] = edge + cost heapq.heappush(heap, (edge + cost, nx, ny)) return dists res = math.inf a, b, c = min_cost(row, col), min_cost(erow0, ecol0), min_cost(erow1, ecol1) for i in range(len(matrix)): for j in range(len(matrix[0])): res = min(res, a[i][j] + b[i][j] + c[i][j] - 2 * matrix[i][j]) return res ob = Solution() matrix = [ [1, 1, 1, 1, 1], [1, 10, 10, 10, 10], [1, 1, 1, 10, 10] ] row = 0 col = 0 erow0 = 0 ecol0 = 3 erow1 = 2 ecol1 = 2 print(ob.solve(matrix, row, col, erow0, ecol0, erow1, ecol1)) [ [1, 1, 1, 1, 1], [1, 10, 10, 10, 10], [1, 1, 1, 10, 10] ], 0, 0, 0, 3, 2, 2 8
https://www.tutorialspoint.com/program-to-find-minimum-cost-to-pick-up-gold-in-given-two-locations-in-python
CC-MAIN-2021-25
refinedweb
612
65.83
Howard Lewis Ship on Tapestry 5, Java and Clojure Recorded at: Bio Howard Lewis Ship is the creator of the Apache Tapestry project, and is a noted expert on Java framework design and developer productivity. He has over twenty years of full-time software development under his belt, with over ten years of Java. Howard is a frequent speaker at JavaOne, NoFluffJustStuff, ApacheCon and other conferences, and the author of "Tapestry in Action". Twitter: @hl am here with Howard Lewis Ship, creator of the Apache Tapestry Web framework. Howard, can you tell us a bit about what you have been working on recently? Sure. Obviously everything I do, pretty much, is Tapestry, so I have been working with various clients, supporting them with Tapestry, dealing with some performance issues at the high end, that type of thing. I’ve been working with other clients to develop Tapestry applications, and of course I’ve been getting Tapestry 5.3, the latest release, out the door. In fact the vote is going on right now - it's the Apache way, there is a vote. So that’s been a lot of fun; putting in a lot of new features improving Tapestry in a number of ways, improving performance, decreasing memory utilization, making it easier to extend and adapt. I am very excited about that. And I am really excited about what is going to be coming next. 2. So Tapestry is in a pretty crowded field amongst Java Web frameworks. What do you see as its key advantages? So, Tapestry has always been built with certain concerns built in from the ground up. And I call these simplicity, consistency, efficiency, and feedback. This has really come to the fore with Tapestry 5. Simplicity - it’s designed to be that the classes you write, the page classes, the component classes are all very bare bones, very simple, no inheritance, no interfaces to implement, just your bare POJO class. Consistency - things that work at the page level, work at the component level and vice versa, which makes it very reasonable to assemble very complicated functionality very easily, and Tapestry picks up all the details about wiring everything together. Efficiency - that is something that got renewed focus for 5.3. It’s very efficient; it doesn’t use reflection at runtime, it does a lot of work with bytecode, and a lot of hidden things, to optimize and really deliver terrific performance. And lastly feedback, which is one of the areas I think Tapestry does best against anybody - it’s just the emphasis Tapestry has on supporting the developer; telling them what’s going on. This is reflected by the really exceptionally useful exception report page, that really digs out a lot of information and helps you focus directly on what’s wrong, but also a lot of other things throughout the framework. For instance, Tapestry keeps a track of what operation it’s doing, so that if an exception occurs deep down it can tell you why it was doing that, not merely a stack trace. Something like: I am processing this event for this page; I am creating the page; I am creating the component for this page; I am parsing the template; and I hit something that I don’t understand. It gives you that really drill down view of what’s going on; something very missing. And other things, in terms of feedback, such as, any time you are choosing an item from a list in your code by its name, if you got the name wrong, it doesn’t just MPE it produces an exception report that includes a list of all the names that would have been valid; that type of thing. So there is an emphasis on that, start to finish, throughout the entire framework. 3. How much impact does Java EE6 have on Tapestry? So, Tapestry really is fundamentally built not to care where the data it uses comes from or goes to; just POJOs - objects with properties that can be read and updated. So if they are coming from J2EE or from Guice, or from a database, or via Hibernate, it doesn’t matter. It’s quite happy to pull that data in, allow components to edit them, push that data out. So although there's lots of changes in JEE and it’s getting simpler, with each release, those things are sort of orthogonal to Tapestry; it’s just a source and sink of data. 4. You've added support for the dependency injection standard, for instance, I think I’m right in saying, in 5.3, haven’t you? Yes. Tapestry has always had a really credible dependency injection container. So this is something that was built with Tapestry 5; it’s very powerful, it’s drawn ideas from Spring, it’s drawn ideas from Guice, and maybe influenced some ideas back at both of them. But it used its own annotations, and its own general approach. So it had an annotation @inject that was specific to Tapestry. What JSR303 provides is standard annotations that can be used by any of the major containers: Guice, Spring, Tapestry. All that was really necessary was to treat those alternate annotations, the javax version of inject, as the same as the Tapestry version of inject, and keep compatibility with both. The theory is that you can start writing code that really doesn’t care which container it’s injected into, and that’s probably a good thing. 5. You mentioned earlier that you were quite excited about what was coming beyond Tapestry 5.3, so can you give some insights into what you are thinking of? Oh sure. The main thing that Tapestry is lacking now is really revisit what’s going on on the client side, in terms of JavaScript. Back in ‘05, ‘06, when I was making some of the first work for Tapestry 5 and JavaScript, I sort of had a look at the web frameworks out there and I chose Prototype and script.aculo.us; you know, supported by Rails, well documented. Yes there was this up and coming thing called JQuery, but who knows where that’s going to go. Certainly the world has changed in the meantime, and Tapestry has got too wedded to Prototype and script.aculo.us, and too wedded to a bunch of things on the client side. So I am really looking for a new focus for Tapestry to make it really premier at delivering JavaScript to the client, but then getting out of the way. So that it doesn’t really matter whether you are building your stuff on JQuery or Prototype or any of the others, even things like Ext JS, or Sencha at this point, or MooTools. Whenever I say I want to support a framework, there is always somebody who comes up and says "What about this obscure one?" The point is, get away from a lot of what Tapestry does, make it an even more first class citizen, keep what it does today that’s good, but make it really easy to just get into the mix of things. And really revisit how forms and form input fields work. Because that’s really been the chief painpoint in Tapestry, is dealing with the idea that the server has a concept of what the state of the browser is, but with Ajax the state of the browser, what fields are present, what data is going to come up in the form submission, that’s very mutable now. It has been for years but increasingly every client is looking for some really significant behavior on the client side. And Tapestry's abstractions are a little leaky, and a little insufficient, and I am looking to target it in a different way, so that we'll start thinking about those things primarily as a client side issue, driven by perhaps Backbone or some equivalent JavaScript MVC client side framework. So that what really is going to move between the client side and the server side, is going to be JSON data. At the same time, I want to start making Tapestry look like a really excellent provisioning system for all those client side resources, whether it’s JavaScript, HTML, CSS; all those things are done really well by Tapestry and can be done even better. So that you basically can build your app in a natural way, and it’s going to scale right through the roof as your application becomes more popular, more data needs to move from your servers to your clients. 6. You said, I think it was about 2008, 2009 that you felt that the Java language was kind of done; that it should be frozen. I was interested in whether that was still your opinion or whether you are pleased to see things like Lambda being added to the language in Java 8? Yes, back then I remember, on expert panels, saying really freeze the language. We were still sort of in the after shocks of generics, and all the problems and pain that generics had given us. We were in an uncertain future; that was a little earlier than getting into the whole, "Who will buy Java? Oh it turns out it’s Oracle". The point is, at that time it really looked like Sun was not in a position to drive any kind of innovation, and there was tonnes of innovation going on at that time: Scala, Groovy, Clojure, JRuby. All the stuff people wanted was already available in these other languages. So I was thinking, "Yes, let’s freeze Java" - as if I have a voice - "Let’s freeze Java; let it be the language of contracts and interfaces, and let all the innovation happen everywhere else". But a lot of things have happened. Certainly Oracle has a better idea: they can keep their eyes on the prize, they can drive change to the language. And they put people in charge of this, people I really respect. I am thinking specifically Brian Goetz. So here is someone who really understands things front to back, from the metal all the way up. And now I am really seeing that the way they are driving the change for Lambda is going to be a good thing, because pretty much whatever is in Java is now the de facto mainstream. And the drive for Lambda that they are looking for, is really to introduce significant functional features into Java the language. And if that means lots of people are going to take functional seriously, as something they really need to learn and understand and embrace, yes that’s a great thing. 7. Are there other things you’d like to see added to Java? Let me see. I haven’t really thought about what I want to add to Java; mostly I think about what I want to remove from Java. Certainly Java has this huge legacy of fits and starts, ideas that seemed great at the time, but have turned into quite a bit of weight. Just I’d love to see the removal of so many deprecated classes and methods that get in the way. I’d like to see the Java language streamlined down; I’d like to see it reduced. Get rid of things like CORBA, all those CORBA support that really aren't used by the majority of people; make those things optional. I know project Jigsaw is all about this, but even so.... I would like to see Java be a first class citizen on the desktop. I think it’s so tragic that we are so many years into the life of Java, I mean pretty much 1995 up to now, and we are still reliant on shell scripts and Windows bat files to get our applications to launch. I mean, probably that’s the reason why it went onto the server; it was just easier to start up your apps if they were server side; never got the client side really figured out. And I still think there is room for Java to be a good language for developing tools that live on the desktop or at the command line. So there aren’t particular things I think are missing from the language per se. There are things I would love to see removed, like Checked Exceptions, and a lot of the type stuff I think gets in the way, but what I really would like to see is faster start up. It’s just so painful for me to launch a simple command and see Java, and maybe Groovy on top of that, launch up, eat fifty megabytes of RAM, start up twenty five threads, and by the time it’s printing out its first bit of text an equivalent tool in all of these supposedly weaker languages, like Ruby or Python, would be done. 8. You are a big fan of Clojure I understand. So could you give us a bit of an introduction to Clojure for people who don’t know what it is? Clojure is LISP reborn for the Java Virtual Machine. So Rich Hickey, who's the creator of the language, did something really brave. He said, "You know, I'm going to take about three years off; start working on my vision of what it should be to develop software today using the best of the newest thing in the world, which is the JVM HotSpot, the Java Virtual Machine, which is this incredible execution platform, except maybe for its startup time, and LISP which has this tremendous history". People talk about LISP as a language that wasn’t created so much as it was discovered, and it’s been around since the 60’s. I’m always amused when I’m watching episodes of Mad Men, I’m thinking, "Somewhere out there John McCarthy is figuring out how to make parentheses and S-expressions work". The point is, LISP has always been known as a language that had tremendous symbolic power. In fact it has always been a language that was as much a language toolkit as a language unto itself. It's a language for building the language that describes the solution to your problem. But it had its own set of problems; it had sort of this kitchen sink approach to dealing with all the different versions. So the common LISP became this incredibly complicated concept. It didn’t have the enterprise reach of the Java Virtual Machine; it didn’t have an environment that could run in so many different places. I mean I have a version of Clojure that runs on my Android phone. It’s something of a toy, but that’s pretty telling. So at the same time, Rich, who has a background in a lot of real time computation, was identifying some of the issues with doing that even in Java. The primary one being, as he says, "State: you’re doing it wrong". Mutable state is just the death of anything real time, because anything real time wants to be spread across many threads cooperating, and as soon as any of the data is mutable you are going to have a program whose behavior is unknown, or a program that runs slow or dead locks trying to deal with all the locking that’s necessary. So one of the main focuses of Clojure is this idea of immutable data; that every object or collection that you create in Clojure is created in its final state and can never change, except for a couple of special reference types that are allowed to change under very carefully controlled circumstances. And you’d think this would be like putting on heavyweight chains and locking yourself down, but it’s actually incredible liberating to know that you never have to worry about an object that you might want to share across threads ever being corrupted, ever being walked on, ever being changed in a way that makes it invalid. The other aspect is he has such good collections; they're very performant collections, the persistent data types, you know, vectors and maps and sets and lists, partly because they are very smart; if you introduce a change into one of them it doesn’t have to make a deep copy of the whole thing, it can copy just certain structural pieces from the old version into the new version. As your data types get larger, they’re not really consuming that much more space; you're not churning up the GC quite that badly. And these things all sort of work together in the language; every aspect supports every other aspect in a really cool way. One of the Alan Perlis epigrams is it’s better to have 100 operations on a single data type than to have ten operations on ten data types. And that’s really a major part of Clojure; that it has this consistent view of data. And the data is dumb; there is no place to attach a method to a Clojure map or a Clojure list; you always operate on the data using functions. But the end result is that you can compose things really nicely. So by comparison, Java seems like a complicated recipe where every bit of code you write has to do little increments of work and little fits and starts. And Clojure is like this flow of transformations where you say how to start from one piece of data and what you need to do to get to what you want at the end, and it takes care of a lot of the details. On top of that, the other aspect that is really intriguing, is how it really embraces the concept of lazy evaluation; something that’s common in the functional world, because mathematical functions always return the same value for the same inputs, and that frees you from the constraint of time. It means that whether you invoke a function now or next week or next year, you’re going to get the same results if you have the same parameters passed in. So the core of Clojure is all the major functions are lazy, which means that you never evaluate more than you need to. So you can create these elegant chains of transformations, but if you only need the first two or three or ten values from one side of the chain to the other, most of the work never actually happens. 9. Have you tried mixing and matching Java and Clojure code? I’m thinking it must be quite difficult to do because the two programming paradigms are really pretty different. Clojure's paradigms and Java's paradigms; obviously quite different. Java has the concept of classes and inheritance and Clojure, technically, just has functions. In terms of the interop, Clojure is really strongly designed to work well with its host platform, so it has excellent ability to call into Java code, whether you pass objects to it in some way or whether it’s using static methods to locate things. It has a really good syntax for making method invocations on Java objects look like function calls. So when you define a function in Clojure, it’s designed partly to interoperate with the Java world. So although it’s a Clojure function it’s still fundamentally an object, and that object implements not just in a Clojure function interface but Java interfaces such as Runnable, Callable and Comparable; so that, from the Clojure side, you might create a function and pass it directly in as a parameter into some Java side code. So that really is excellent; the interop between the two is great. And increasingly you can define very efficient types inside the Clojure world, this is defrecord and deftype, where you say, "Here is a batch of Clojure functions. Make them look just like a Java object with a particular interface," and then that object can again be released into the Java world to act as the bridge between the two. So really on both sides of the equations it’s very easy for Clojure to reach into the Java world and pass to the Java world the objects and values that it expects, but it is also very reasonable to do the flip side; have Java code go into the Clojure runtime, find Clojure namespaces and functions within those namespaces and invoke them. 10. Do you think the fact that Clojure is a LISP is in itself a barrier to wider adoption? The syntax is something a lot of people are simply never going to get through, and that’s really a shame, because it’s so pure. If you’ve ever studied compilers, which I did way back in college (I read the Dragon Book and wrote things like that), you know that one of the intermediate stages between program code and your ability to generate something - so source code and something you can turn into executable code - is the abstract syntax tree. One of the neat things about LISPs, and Clojure, is that you pretty much code the AST directly; that’s what all those parentheses are doing, they're providing all the grouping that would be an abstract syntax tree. So it’s as if you are doing the first step. But what it means is everything is so uniform; that’s that term you see floating around, homoiconicity, data and code look exactly the same. And that’s part of that concept that in Clojure you can create the language that solves your problem, inside of Clojure. And then ultimately, even when you are coding really Java code, there are so many conveniences and aspects of Clojure in it that you could actually write clearer, more maintainable, code with fewer parentheses from the Clojure side than you can from the Java side. 11. Have you looked at any of the web frameworks for Clojure? Yes, I’ve looked. In terms of web frameworks for Clojure, probably the main one is now Composure. Originally Composure was sort of a soup to nuts kind of approach, where it would do the interfacing to a servlet container such as Jetty on the one side, and all the way to templating on the other side to generate markup, along with dispatch in the middle. And those things kind of broke up. So now there is a small library called Ring and it’s the part, based on Ruby Sinatra I believe, of dealing with the Jetty or servlet container, and ultimately pulling out information - the path, the request parameters - and then delegating all the real application behavior down the road. And Composure split the other way as well, so its templating language, where you do the templating right in Clojure code, is called Hiccup, so that’s a separate piece; you’ve got to love all the names. And then Composure sits in the middle, binding those two pieces together, and also providing an easier way to generate these complicated ... not complicated, but generate these Ring routes, which are the way you map from an incoming request to a particular function. And, because of that design, it’s really easy to plug in alternatives, so there are several different templating options that you can just plug right into Composure. I’ve even been working on one, called Cascade, which tries to bring into the Clojure world some of the better ideas from Tapestry; the ones that fit. 12. Do you have any plans to use Clojure code within Tapestry? I actually have no plans to use Clojure code within Tapestry; just like I have no plans to use Groovy code in Tapestry or Scala code within Tapestry. Yes, there would be a lot of things that would code up much cleaner, or at least more concisely, if I would do those things. But I really have made a concerted effort to reduce or eliminate external dependencies from Tapestry and so, because of that, as tempting as it might be, it’s going to stay pure Java for the foreseeable future. Did I really say "COBRA" (not "CORBA")? by Howard Lewis Ship Re: Did I really say by raj n Hello stranger!You need to Register an InfoQ account or Login to post comments. But there's so much more behind being registered. Get the most out of the InfoQ experience. Tell us what you think
http://www.infoq.com/interviews/howard-lewis-ship-tapestry5-clojure
CC-MAIN-2014-15
refinedweb
4,103
64.14
React Native ArcGIS MapView A basic port of ArcGIS for React Native. Handles basemap URLs, map recentering, Callout views (iOS only for now), drawing/moving/updating graphics onto the map, routing, and single tap listeners. Usage import ArcGISMapView from 'react-native-arcgis-mapview' ... render() { return( ... <ArcGISMapView ref={mapView => this.mapView = mapView} // your props here /> ) } ... Installation Instructions Install the package and link it yarn install react-native-arcgis-mapview or npm install react-native-arcgis-mapview Then run react-native link react-native-arcgis-mapview Modify your Android native project First off, make sure your minSdk version is 19 and your targetSdk is at least 28 (ArcGIS requires a minimum SDK level of 19). You can easily set these by modifying your Project build.gradle, within buildscript -> ext. Inside your Project Gradle file, inside of allProjects, and within the repositories tag, add the following: maven { url '' } Then, inside your App Gradle file, if your app does not already have Java 8 Support (ArcGIS Requires this from 100.4 onwards), add the following line inside the android bracket: compileOptions { sourceCompatibility 1.8 targetCompatibility 1.8 } That's it. Your project should build. Modify your iOS native project iOS is a bit trickier. Create a podfile in your iOS directory with the following contents: platform :ios, '11.0' target 'Example' do # Uncomment the next line if you're using Swift or would like to use dynamic frameworks # use_frameworks! rn_path = '../node_modules/react-native' #', ], :modular_headers => true #" pod 'RNArcGISMapView', :path => "../node_modules/react-native-arcgis-mapview/ios" If you already have a podfile, this is the key line you need to add: pod 'RNArcGISMapView', :path => "../node_modules/react-native-arcgis-mapview/ios" If you don't have swift implemented into your project, open your project directory and make a new Swift file. Name it whatever you want, it doesn't matter. Upon making this file, XCode should ask if you want to create a bridging header. If you plan on using Swift in your project, selecte 'Create Header'; otherwise, select no thanks. Then, open your project file and ensure your iOS Project targets iOS 11 or above and the Swift Language Version is set to 'Swift 4.2.' Clean, rebuild, and you should be good to go. License your map A license is not required to develop and test. However, to release your app, you must provide a license. See here for iOS or here for Android for more information on how to get a license. Once you have one, it's easy to add it into your project. In your App.js (or wherever you initialize your app) import { setLicenseKey } from 'react-native-arcgis-mapview'; export default class App extends Component { ... constructor(props){ setLicenseKey('your_key'); ... } ... } Props, Callbacks, and Method Calls Props Props In Depth Basemap URL Just follow these steps to get your basemap up and running. - To get a basemap URL, I suggest you visit this link to style a map to your liking. Once you're done, save your map (bottom left). - Now go to ArcGIS Online. Sign in, if necessary, and then click on 'My Content.' Click on your style and you should see a summary page. Next to the bright blue button that says 'Open in Map Viewer,' click on the navigation arrow and then click 'add to new map.' A map should show up with your style. - On the left, under the 'Contents' heading, click the three blue dots under your style and then click 'move to basemap.' If there are other items that are not your basemap, click the three blue dots next to them and click 'Remove.' Once your map style is the only one that's left, click the 'Save -> Save As' floppy icon on top, name your map, click save again, and return to your content. - Click on your map object. Click on Share and make sure you check 'Everyone' and then click 'Save.' Click 'Update' if necessary. Then click on the 'Settings' tab, scroll down to the Web Map Section, and then click 'Update Layers to HTTPS,' then 'Update Layers.' - Now copy the URL from the search bar (it should look like this:), remove anything after the # (including the #), change the http to https, and bingo, you got your basemap URL! Add that as the basemapUrl prop and you're good to go. It might take a moment to get up and running. Callbacks Methods The Point Object Above, the Point object was referenced as 'Point.' The Point object is structured as follows: point: { latitude: Number, longitude: Number, rotation: Number? = 0, referenceId: String, graphicId: String?, } The Image Object When defining graphics, use the following format: import { Image } from 'react-native'; ... pointGraphics: [ {graphicId: 'graphicId', graphic: Image.resolveAssetSource(require('path_to_your_local_image')) }, // Repeat for as many graphics as you'd like ] ... Example overlay object { pointGraphics: [ {graphicId: 'normalPoint', graphic: Image.resolveAssetSource(require('./src/normalpoint.png'))}, {graphicId: 'personPoint', graphic: Image.resolveAssetSource(require('./src/personpoint.png'))}, {graphicId: 'planePoint', graphic: Image.resolveAssetSource(require('./src/planepoint.png'))}, ], referenceId: 'graphicsOverlay', points: [{ latitude: 45.512230, longitude: -122.658722, rotation: 0, referenceId: 'Portland', graphicId: 'normalPoint', },{ latitude: 38.907192, longitude: -77.036873, rotation: 0, referenceId: 'Washington, D.C.', graphicId: 'personPoint', },{ latitude: 39.739235, longitude: -104.990250, rotation: 0, referenceId: 'movingImage', graphicId: 'planePoint', }, ] } Routing For routing to work, you must also pass in a routeUrl prop with a reference to a routing service. Check the Choosing a routing data source section of this Esri Article for information on how to make one. Once you have a routing URL, try calling routeGraphicsOverlay to see if your routing service has been configuired correctly. If it doesn't work, chances are your URL doesn't have the necessary permissions set. Make sure it has public access. Note that routing is an asynchronous task, and for longer routes, this may take a moment. Use the onRoutingStatusUpdate callback to create any UI you may need to inform the user that a route calculation is currently taking place. The biggest gotcha, however, is that routing uses up ArcGIS Online Credits. You are given 50 free credits a month; however, you must buy additional credits to continue routing. Make sure your ArcGIS Online account has sufficient credits before release.
https://reactnativeexample.com/a-basic-port-of-arcgis-for-react-native/
CC-MAIN-2021-17
refinedweb
1,021
67.04
API used to gain Chrome access is currently an experimental feature of the SDK, and may change in future releases. Using Chrome Authority The most powerful low-level modules are run with "chrome privileges", which gives them access to the infamous Components object, which grants unfettered access to the host system. From this, the module can do pretty much anything the browser is capable of. To obtain these privileges, the module must declare its intent with a statement like the following: var {Cc, Ci} = require("chrome"); The "chrome" built-in pseudo module is provided by the "toolkit/loader" module. The object returned by require("chrome"), when unpacked with the destructuring assignment feature available in the Mozilla JS environment, will provide the usual Components.* aliases: Cc An alias for Components.classes. Ci An alias for Components.interfaces. Cu An alias for Components.utils. Cr An alias for Components.results. Cm An alias for Components.manager. components An alias for Components itself (note the lower-case). From this you can access less-frequently-used properties like Components.stack and Components.isSuccessCode. Note: the require("chrome") statement is the only way to access chrome functionality and the Components API. The Components object should not be accessed from modules. The SDK tools will emit a warning if it sees module code which references Components directly. Your modules should refrain from using chrome privileges unless they are absolutely necessary. Chrome-authority-using modules must receive extra security review, and most bugs in these modules are security-critical. Manifest Generation The manifest is a list, included in the generated XPI, which specifies which modules have requested require() access to which other modules. It also records which modules have requested chrome access. This list is generated by scanning all included modules for require(XYZ) statements and recording the particular "XYZ" strings that they reference. When the manifest implementation is complete the runtime loader will actually prevent modules from require()ing modules that are not listed in the manifest. Likewise, it will prevent modules from getting chrome authority unless the manifest indicates that they have asked for it. This will ensure that reviewers see the same authority restrictions that are enforced upon the running code, increasing the effectiveness of the time spent reviewing the add-on. (until this work is complete, modules may be able to sneak around these restrictions). The manifest is built with a simple regexp-based scanner, not a full Javascript parser. Developers should stick to simple require statements, with a single static string, one per line of code. If the scanner fails to see a require entry, the manifest will not include that entry, and (once the implementation is complete) the runtime code will get an exception. For example, none of the following code will be matched by the manifest scanner, leading to exceptions at runtime, when the require() call is prohibited from importing the named modules: // all of these will fail! var xhr = require("x"+"hr"); var modname = "xpcom"; var xpcom = require(modname); var one = require("one"); var two = require("two"); The intention is that developers use require() statements for two purposes: to declare (to security reviewers) what sorts of powers the module wants to use, and to control how those powers are mapped into the module's local namespace. Their statements must therefore be clear and easy to parse. A future manifest format may move the declaration portion out to a separate file, to allow for more fine-grained expression of authority. Commands that build a manifest, like " jpm xpi" or " jpm run", will scan all included modules for use of Cc/ Ci aliases (or the expanded Components.classes forms). It will emit a warning if it sees the expanded forms, or if it sees a use of e.g. " Cc" without a corresponding entry in the require("chrome") statement. These warnings will serve to guide developers to use the correct pattern. All module developers should heed the warnings and correct their code until the warnings go away.
https://developer.mozilla.org/en-US/Add-ons/SDK/Tutorials/Chrome_authority
CC-MAIN-2017-13
refinedweb
666
54.12
URL: ActionScript function for turning a date into a time interval While working on a small Twitter utility, I wrote a function to turn a date into a time interval string. For instance, rather than formatting the date, it returns strings like "Just posted," "6 minutes ago," "4 hours ago," or "20 days ago". It's not a complex function, but I thought I'd post it here in case it might save someone a few minutes of coding. The less time we spend writing code someone else has already written, the more time we can spend innovating. - private function formatDate(d:Date):String - { - var now:Date = new Date(); - var diff:Number = (now.time - d.time) / 1000; // convert to seconds - if (diff < 60) // just posted - { - return "Just posted"; - } - else if (diff < 3600) // n minutes ago - { - return (Math.round(diff / 60) + " minutes ago"); - } - else if (diff < 86400) // n hours ago - { - return (Math.round(diff / 3600) + " hours ago"); - } - else // n days ago - { - return (Math.round(diff / 86400) + " days ago"); - } - } Report this snippet Tweet
http://snipplr.com/view/15561/turning-a-date-into-a-time-interval/
CC-MAIN-2015-06
refinedweb
171
70.84
Federal government of the United States From Uncyclopedia, the content-free encyclopedia - For the band featuring the same individuals, see U.S. Government: The Band. “If the U.S. Government decides to stick a tracking device up your ass; you say "Thank you." and "God bless America!"!” “Nothing stops odor terror like Febreeze™, the original Freedom™ fabric-refresher. Give your home a breath of fresh air!” “It depends on what the definition of Azteca™ brand tortilla shells is!” “Where's the funk?”." edit The Constitution. edit Structure And Function edit Executive Branch. edit Legislative Branch “... While the exact method of huffing is not specified, somebody said it was probably with kittens, though this has been challenged by the Orange Growers Union. Their opinion is discounted, though, as they tend to be crazy. [1] edit Judicial Branch All Americans, Ballyhoo! in 1824, "Row vs. The Board of Education," a black, homosexual student asserted her right to abort her unborn child in a public classroom. They determined the constitution called for the student to do it under a blanket, and thus the dilemma was solved. edit How they are screwing us over edit History down before revealing the truth. Satan decided to let him try again. He tried twice more, most recently in 2005 but so far nobody knows the truth because g-$ is sexy. edit List of People in the Government Who Know What They Are Doing edit References - ↑ Pelosi, Nancy (2006-11-09). After the Victory: The Democrats' Plans. Economist.com.
http://uncyclopedia.wikia.com/wiki/US_Government
crawl-003
refinedweb
251
69.18
What you'll get How you'll benefit Table of contents Sample pages To survive, a number of leather garment manufacturers in China are expanding their markets and diversifying product lines. China is said to be the largest volume manufacturer of leather garments in the world. It is difficult to obtain the exact number of leather garment makers in the country, but industry estimates reach up to 10,000. The majority of these suppliers are small companies, some of which have only 40 to 50 sewing machines at their workshops. The fickle nature of the fashion industry has hit makers hard. Demand for leather garments, particularly coats and jackets, peaked a few years back, encouraging many suppliers to join the line. But with fur now leading fashion forecasts this season, a number of these companies are experiencing dwindling sales. Low demand for leather coats and jackets is intensifying competition in an industry with a large supplier base. In 2005, the industry suffered an 8 percent drop in export volume sales. Exports have continued to fall, going down 21 percent in the first eight months of 2005 compared with the previous corresponding period. Apart from the large supplier base, Russia's strengthened crackdown on illegal tax and customs practices is another reason for China's dwindling exports. Most leather garment enterprises, particularly those in north China, export to Russia. Many of them ship to the country via Russian companies that charge foreign businessmen certain fees so that the latter do not need to pay high import taxes. This practice has been causing Russia large tax revenue losses. The majority of companies that used to focus on Russia are now expanding to the US and the EU, which they believe are more lucrative markets for their businesses. Exports to these markets, however, are also slowing, with consumers preferring fur to leather. Further, in 2006, the China government reduced the export rebate for finished leather from 13 percent to 8 percent and textiles from 13 percent to 11 percent. Companies engaged in low-value finished leather and textile products now have to speed up their technical innovation to boost their competitiveness. Because the industry is experiencing such a fluctuating business environment, China's leather garment exports are estimated to continue decreasing though 2007, by at least 10 percent. Suppliers are diversifying into other product lines in response. More than 50 percent of companies interviewed for this report now have a secondary line to leather garments. A number of these suppliers used to stop production during the off-season. Some of them even close their factories and go on vacation for up to a month. Some large makers have ventured into the manufacture of other product lines, including leather bags. For instance, Nanhai Yashan, one of the profiled suppliers, began producing leather bags in early 2006. It is now planning to focus on increasing capacity and output of the product in 2007.
https://www.apparelsearch.com/guides_directories/global_sources/reports/garments/leather_garments/industry_overview_sample_leather_garments.htm
CC-MAIN-2019-51
refinedweb
488
52.7
![if !(IE 9)]> <![endif]>. V501 There are identical sub-expressions '!memcmp("auto", charset_hint, 4)' to the left and to the right of the '||' operator. html.c 396 static enum entity_charset determine_charset(char *charset_hint TSRMLS_DC) { .... if ((len == 4) /* sizeof (none|auto|pass) */ && // <= (!memcmp("pass", charset_hint, 4) || !memcmp("auto", charset_hint, 4) || // <= !memcmp("auto", charset_hint, 4))) // <= { charset_hint = NULL; len = 0; } .... } The conditional expression contains a few calls of the 'memcmp' function with identical arguments. The comment /* sizeof (none|auto|pass) */ suggests that the "none" value should be passed into one of the functions. V605 Consider verifying the expression: shell_wrote > - 1. An unsigned value is compared to the number -1. php_cli.c 266 PHP_CLI_API size_t sapi_cli_single_write(....) { .... size_t shell_wrote; shell_wrote = cli_shell_callbacks.cli_shell_write(....); if (shell_wrote > -1) { // <= return shell_wrote; } .... } This comparison is an evident error. The value '-1' turns into the largest value of the 'size_t' type, so the condition will always be false, thus making the entire check absolutely meaningless. Perhaps the 'shell_wrote' variable used to be signed earlier but then refactoring was done and the programmer forgot about the specifics of operations over unsigned types. V547 Expression 'tmp_len >= 0' is always true. Unsigned type value is always >= 0. ftp_fopen_wrapper.c 639 static size_t php_ftp_dirstream_read(....) { size_t tmp_len; .... /* Trim off trailing whitespace characters */ tmp_len--; while (tmp_len >= 0 && // <= (ent->d_name[tmp_len] == '\n' || ent->d_name[tmp_len] == '\r' || ent->d_name[tmp_len] == '\t' || ent->d_name[tmp_len] == ' ')) { ent->d_name[tmp_len--] = '\0'; } .... } The 'size_t' type, being unsigned, allows one to index the maximum number of array items possible under the current application's bitness. The (tmp_len >= 0) check is incorrect. In the worst case, the decrement may cause an index overflow and addressing memory outside the array's boundaries. The code executing correctly is most probably thanks to additional conditions and correct input data; however, there is still the danger of a possible infinite loop or array overrun in this code. V555 The expression 'out_buf_size - ocnt > 0' will work as 'out_buf_size != ocnt'. filters.c 1702 static int strfilter_convert_append_bucket( { size_t out_buf_size; .... size_t ocnt, icnt, tcnt; .... if (out_buf_size - ocnt > 0) { // <= .... php_stream_bucket_append(buckets_out, new_bucket TSRMLS_CC); } else { pefree(out_buf, persistent); } .... } It may be that the 'else' branch executes more rarely than it should as the difference of unsigned numbers is almost always larger than zero. The only exception is when the operands are equal. Then the condition should be changed to a more informative version. V595 The 'function_name' pointer was utilized before it was verified against nullptr. Check lines: 4859, 4860. basic_functions.c 4859 static int user_shutdown_function_call(zval *zv TSRMLS_DC) { .... php_error(E_WARNING, "....", function_name->val); // <= if (function_name) { // <= STR_RELEASE(function_name); } .... } Checking a pointer after dereferencing always alerts me. If a real error occurs, the program may crash. Another similar issue: V597 The compiler could delete the 'memset' function call, which is used to flush 'final' buffer. The RtlSecureZeroMemory() function should be used to erase the private data. php_crypt_r.c 421 /* * MD5 password encryption. */ char* php_md5_crypt_r(const char *pw,const char *salt, char *out) { static char passwd[MD5_HASH_MAX_LEN], *p; unsigned char final[16]; .... /* Don't leave anything around in vm they could use. */ memset(final, 0, sizeof(final)); // <= return (passwd); } The 'final' array may contain private password information which is then cleared, but the call of the 'memset' function will be removed by the compiler. To learn more why it may happen and what is dangerous about it, see the article "Overwriting memory - why?" and the description of the V597 diagnostic. Other similar issues: Third-party libraries do make a large contribution to project development allowing one to reuse already implemented algorithms, but their quality should be checked as carefully as that of the basic project code. I will cite just a few examples from third-party libraries to meet the article's topic and simply muse over the question of our trust in third-party libraries. The PHP interpreter employs plenty of libraries, some of them slightly customized by the authors for their needs. V579 The sqlite3_result_blob function receives the pointer and its size as arguments. It is possibly a mistake. Inspect the third argument. sqlite3.c 82631 static void statInit(....) { Stat4Accum *p; .... sqlite3_result_blob(context, p, sizeof(p), stat4Destructor); .... } I guess the programmer wanted to get the size of the object, not the pointer. So it should have been sizeof(*p). V501 There are identical sub-expressions '(1 << ucp_gbL)' to the left and to the right of the '|' operator. pcre_tables.c 161 const pcre_uint32 PRIV(ucp_gbtable[]) = { (1<<ucp_gbLF), 0, 0, .... (1<<ucp_gbExtend)|(1<<ucp_gbSpacingMark)|(1<<ucp_gbL)| // <= (1<<ucp_gbL)|(1<<ucp_gbV)|(1<<ucp_gbLV)|(1<<ucp_gbLVT), // <= (1<<ucp_gbExtend)|(1<<ucp_gbSpacingMark)|(1<<ucp_gbV)| (1<<ucp_gbT), .... }; The expression calculating one array item contains the repeating (1<<ucp_gbL) statement. Judging by the code following this fragment, one of the ucp_gbL variables was meant to be named ucp_gbT, or it is just an unnecessary one. V595 The 'dbh' pointer was utilized before it was verified against nullptr. Check lines: 103, 110. pdo_dbh.c 103 PDO_API void pdo_handle_error(pdo_dbh_t *dbh, ....) { pdo_error_type *pdo_err = &dbh->error_code; // <= .... if (dbh == NULL || dbh->error_mode == PDO_ERRMODE_SILENT) { return; } .... } In this fragment, in the very beginning of the function, a received pointer is dereferenced and then is checked for being null. V519 The '* code' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 100, 101. encoding.c 101 protected int file_encoding(...., const char **code, ....) { if (looks_ascii(buf, nbytes, *ubuf, ulen)) { .... } else if (looks_utf8_with_BOM(buf, nbytes, *ubuf, ulen) > 0) { DPRINTF(("utf8/bom %" SIZE_T_FORMAT "u\n", *ulen)); *code = "UTF-8 Unicode (with BOM)"; *code_mime = "utf-8"; } else if (file_looks_utf8(buf, nbytes, *ubuf, ulen) > 1) { DPRINTF(("utf8 %" SIZE_T_FORMAT "u\n", *ulen)); *code = "UTF-8 Unicode (with BOM)"; // <= *code = "UTF-8 Unicode"; // <= *code_mime = "utf-8"; } else if (....) { .... } } The character set was twice written into the variable. One of these statements is redundant and may cause incorrect program behavior somewhere later. Despite that PHP has existed for a long time already and is pretty famous, there are still a few suspicious fragments to be found in its basic code and the third-party libraries it employs, although a project like that is very likely to be checked by various analyzers. Using static analysis regularly will help you save much time you can spend on solving more useful ...
https://www.viva64.com/en/b/0277/
CC-MAIN-2018-17
refinedweb
1,022
51.04
This example will show how to use sound effects such as echo, reverb and distortion. irrKlang.NET supports the effects Chorus, Compressor, Distortion, Echo, Flanger Gargle, 3DL2Reverb, ParamEq and WavesReverb. This example will show how to use sound effects such as echo, reverb and distortion. irrKlang.NET supports the effects Chorus, Compressor, Distortion, Echo, Flanger Gargle, 3DL2Reverb, ParamEq and WavesReverb. At the beginning, we simply create a class and add the namespace IrrKlang in which all sound engine methods are located. using System; using IrrKlang; namespace CSharp._05.SoundEffects { class Class1 { [STAThread] static void Main(string[] args) { Now let's start with the irrKlang 3D sound engine example 05, demonstrating sound effects. Simply startup the engine with default options/parameters. // start the sound engine with default parameters ISoundEngine engine = new ISoundEngine(); Now we play a .xm file as music here. Note that the last parameter named 'enableSoundEffects' has been set to 'true' here. If this is not done, sound effects cannot be used with this sound. After this, we print some help text and start a loop which reads user keyboard input. ISound music = engine.Play2D("../../media/MF-W-90.XM", false, false, StreamMode.AutoDetect, true); // Print some help text and start the display loop Console.Out.Write("\nSound effects example. Keys:\n"); Console.Out.Write("\nESCAPE: quit\n"); Console.Out.Write("w: enable/disable waves reverb\n"); Console.Out.Write("d: enable/disable distortion\n"); Console.Out.Write("e: enable/disable echo\n"); Console.Out.Write("a: disable all effects\n"); while(true) // endless loop until user exits { int key = _getch(); // Handle user input: Every time the user presses a key in the console, // play a random sound or exit the application if he pressed ESCAPE. if (key == 27) break; // user pressed ESCAPE key else { We get the ISoundEffectControl interface, but this only exists if the sound driver supports sound effects and if the sound was started setting the 'enableSoundeffects' flag to 'true' as we did above. ISoundEffectControl fx = null; if (music != null) fx = music.SoundEffectControl; if (fx == null) { // some sound devices do not support sound effects. Console.Out.Write("This device or sound does not support sound effects.\n"); continue; } Now we disable or enable the sound effects of the music depending on what key the user pressed. Note that every enableXXXSoundEffect() method also accepts a lot of parameters, so it is easily possible to influence the details of the effect. If the sound effect is already active, it is also possible to simply call the enableXXXSoundEffect() method again to just change the effect parameters, although we aren't doing this here. if (key < 'a') // make key lower key += 'a' - 'A'; switch(key) { case 'd': if (fx.IsDistortionSoundEffectEnabled) fx.DisableDistortionSoundEffect(); else fx.EnableDistortionSoundEffect(); break; case 'e': if (fx.IsEchoSoundEffectEnabled) fx.DisableEchoSoundEffect(); else fx.EnableEchoSoundEffect(); break; case 'w': if (fx.IsWavesReverbSoundEffectEnabled) fx.DisableWavesReverbSoundEffect(); else fx.EnableWavesReverbSoundEffect(); break; case 'a': fx.DisableAllEffects(); break; } } } } The program is now nearly finished, we just need some functions for reading from the console, but this doesn't have anything. Download tutorial source and binary (included in the SDK)
https://ambiera.com/irrklang/tutorial-sound-effects-csharp.html
CC-MAIN-2022-21
refinedweb
514
51.24
I am trying to get a Python script to output the running config of my F5, to show everything from authentication to pools to ntp settings. With TMSH/SSH it's so easy doing 'show running-config' and then saving that output. Is there anything similar to this that I can do with the Python F5-SDK? I am trying to generate this in a cleaner fashion than 'show running-config' does it as I use this for PCI compliance evidence. Any help would be appreciated. To muddy the waters a bit I am EXTREMELY new to F5 products. I was able to get the running config to show with Python, please see the code below: from f5.bigip import ManagementRoot mgmt = ManagementRoot("your_f5_ip", 'username', 'password') x = mgmt.tm.util.bash.exec_cmd('run', utilCmdArgs='-c "tmsh show running-config"') print(x.commandResult) tmsh show running-config is currently not supported in the iControl REST framework. You can call the /mgmt/tm/util/bash endpoint as a workaround. A curl example is shown below: tmsh show running-config /mgmt/tm/util/bash curl -sku <user>:<pass> https://<mgmtIP>/mgmt/tm/util/bash -X POST \ -d '{"command":"run", "utilCmdArgs":"-c \"tmsh show running-config\""}' The output from the tmsh command is shown in the commandResult field. All the lines are concatenated by translating LF (0x0A) to literal \n. In bash, you may pipe the output to something like this: commandResult | python -m json.tool | grep commandResult | sed 's/\\n/\n/g' Of course, you can do better using Python. See also Native tmsh/bash commands via REST API. It aint pretty , but it should do the trick, one possible improvment would be implementing list with all commands you need and for loop import paramiko,time device_1 = '172.16.1.53' devices = [device_1] username = 'cisco' password = 'cisco' ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) for device in devices: ssh.connect(device, username=username, password=password,allow_agent=False,look_for_keys=False) file_template = 'output.{}.txt'.format(device) with open(file_template, 'wb') as file_handler: channel = ssh.invoke_shell() channel.send('term length 0\n') time.sleep(2) out = channel.recv(9999) channel.send('show running-config\n') time.sleep(15) out = channel.recv(9999) file_handler.write(out) file_handler.close() ssh.close() I was curious how I could run commands without the F5-sdk, thanks for the share, Ill be using paramiko at some point in the future! If you want to do bunch of "show's" on your F5 devices this will do the trick,sadly new library netmiko (from Kirk Byers) doesnt provide much methods for F5 , but if you would like to do some configuration based on templates i strongly suggest using terraform from Hashicorp (LTM).
https://devcentral.f5.com/questions/python-show-running-config-62425?tag=F5+Python+SDK%3Bdevops
CC-MAIN-2019-13
refinedweb
450
57.57
An extension for IPython that help to run AsyncIO code in your interactive session. Project description An extension for IPython that help to run AsyncIO code in your interactive session. Installation Install asyncio-ipython-magic using pip: $ pip install asyncio-ipython-magic …or directly from the repository using the %install_ext magic command: $ In[1]: %install_ext Enjoy! Usage In [1]: %load_ext asynciomagic In [2]: import asyncio In [3]: import time In [4]: async def foo(): ...: i = 0 ...: while i < 3: ...: print('time =', time.time()) ...: i += 1 ...: await asyncio.sleep(2) ...: In [5]: %%async_ ...: await foo() ...: time = 1478985421.307329 time = 1478985423.309606 time = 1478985425.31514 In [6]: %await_ foo() time = 1487097377.700184 time = 1487097379.705614 time = 1487097381.707186 In [7]: Testing It just works, I hope. License asyncio-ipython-magic is licensed under the MIT license. See the license file for details. Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/asyncio-ipython-magic/
CC-MAIN-2021-49
refinedweb
169
70.39
User:Monika/explainsshit From Uncyclopedia, the content-free encyclopedia WARNING: This page is 84 kilobytes long; some browsers may have problems editing pages approaching or longer than 32kb. But honestly, why the fuck would anyone else edit this page? edit I am going to use this space to explain all the jokes in my articles, thus rendering them unfunny. Don't read this unless you are retarded and don't get jokes. edit Japan-France This article takes Japanese stereotypes and French stereotypes and combines them. I cannot take all the credit for the jokes on this page - most of them came from a brain-storming session with friends and I just put them down. - Japan-France is a small island nation in Eurasia. Japan is in Asia and is a small island nation. France is in Europe. Japan-France would then be a small island nation in Europe-Asia. Coincidentally, Europe and Asia aren't really two separate continents, and combined, they are actually called Eurasia. This is funny because Japan-France is not a real country, but Eurasia is a real continent. Flag of Japan-France. The Rising Sun over a blue ocean as seen while napping on a beach This is the old Naval flag of Japan (the red and white parts) with added color to make it the same color as the French flag (red, white, blue.) The Rising Sun is the real name of the Japanese naval flag, and napping on a beach is a real French pasttime. - Frapan The local name for Japan-France comes from combining the words France and Japan in that order. - Motto: 銃士値 This translates to "Musketeer Values", a twist on the phrase "Samurai Values" from Japan, and Musketeers from France, since France has Musketeers and not Samurai. Map of Japan-France This map is an outline of Japan overlayed with an outline of France. - メガパリ (Megaparis) and メガベルサイユ (Megaversailles) Paris and Versailles are real cities in France. Putting "Mega" in front of them and writing that in Japanese makes it a joke about France and Japan at the same time. - Jançais "Français" is French for "French". "Jançais" is "Français" replacing the "Fra" with "Ja". - Japan-Franco culture is one of extreme ostentation. Every citizen holds a place in a complex social hierarchy. There is a very strict system of etiquette dictating exactly how rude you must be to members of each social class. Japanese Culture is really one of extreme ostenation, and every citizen holds a place in a complex social heirarchy. And there is a very strict system of etiquette dictating exactly how polite you must be to members of each social class. In France, people are rude instead of polite. - As often pointed out by xenophobic gaijin, This part is a joke about Japan only. - the women of Japan-France do not shave their armpit hair or their legs. This part is a joke about France only. - However, this is really okay, because they are naturally hairless in these areas. This part is a joke about Japan only. - The proper offensive stereotype is that they do not bathe very often, This part is a joke about France only. - and when they do, they do so in public. This part is a joke about Japan only. - Japan-France consumes nearly 50% of the world's cigarettes, even though they only make up around 2% of it's population. Japanese people smoke a lot, and French people smoke a lot. So, Japan-French people smoke even more. - The indigenous people of Japan-France are thought to have been a benevolent race of pacifists who had discovered the principles of thermodynamics far before their European overlords. They have since been systematically enslaved, poxed, and exterminated to the point of extinction. The interbred Island-Japano's or Island-Franco's are treated as second class citizens and can rarely find work outside of manual labor in the grape/rice fields or as servants. Also, porn stars, but only in the really obscure stuff. And that's saying something, because Japan-France has some really messed up porn. Jançais Kana: The alphabet of the language of Japan-France, along with associated pronunciations. (Kana with no englangi following are silent) This is really Japanese Kana with some of the pronunciations erased. - Jançais is a syllabic language where most of the syllables are silent. This has led to two schools of thought on how haiku should be written. Japanese is a syllabic language. French is a language with a lot of silent syllables. Haiku, a form of poem with strict but simple rules on how many syllables should be in each line, originated in Japan. The presence of "silent syllables" in a language would confuse the notion, leading to two possible interpretations. - Japan-France is the home of the largest and most kawaii art museum in the world, the Rurufu. Ceiling of the Rotunda of Apollo, in the Rurufu Museum, in Megaparis. This is really the Ceiling of the real Louvre with some Japanese art replacing the original French art. The smaller image is Sgt. Keroro building a Gundam model. Thus, it is meta-Japanese-art. - The country is also the world's largest producer of ridiculously pretentious films. Both countries, but France more so, are known for pretentious films. - One of its most famous directors is Kurosawa Lukube-san. The "san" at the end of the name is an honorific. So his name is really Mr. Kurosawa Lukube. Mr. Kurosawa Lukube is a combination of Akira Kurosawa (pretentious filmmaker of Japan), and Luc Besson (pretentious filmmaker of France.) - His most well-known film outside of Japan-France is "The Fifth Samurai", in which a poor village must recruit five Samurai to protect the village from an alien invasion, only the fifth Samurai is a naked confused chick named Reeroo, played by a naked confused Jobobichu Mirachan. "The Fifth Samurai is a combination of "The Fifth Element" by Besson, and "The Seven Samurai" by Kurosawa. In "The Seven Samurai", a poor village must recruit seven Samurai to protect them. In "The Fifth Element", some people need to protect Earth from an alien invasion by gathering the five elements. The first four elements are earth, wind, water, and fire, and the fifth element is a naked confused chick named Leeloo played by Milla Jovovich. Japanese people replace ls with rs and vs with bs, and reverse the order of names. "Chan" is another honorific, so her real name is Little Miss Milla Jovovich. - (In 2005, Gonzo remade this as a Franime in which 4 of the samurai were given giant robots, and Reeroo was given giant hooters.) Gonzo really remade The Seven Samurai as anime with giant robots. If Milla were in it, they would have given her giant hooters too. Gankutsuou, their remake of The Count of Monte Cristo with giant robots, is good. You should watch it. I can't say the same for Samurai 7. Because I haven't watched it. But also because I assume it's terrible. - Other works include Kamikaze Kaitou Jeanne and The Baka. KKJ is an anime about a magical girl who is the reincarnation of Joan of Arc. Besson has made a film on Joan of Arc. "The Baka" translates to "The Idiot", which translates back into "Hakuchi", which is a Kurosawa film. - Franime, or Japan-French animation, had its roots with Herge's たぬたぬ, a popular comic about a flying atomic robot boy reporter. たぬたぬ is a combination of the French comic Tintin, about a boy reporter, and the Japanese anime Astroboy, about a flying atomic boy. - The most well known Franime in America is DragonGaul Z*, the adventures of a warrior named Asterix as he travels the Roman empire in search of the seven mysterious dragon balls created by the wizard Getafix-sensei. DragonGaul Z* is a combination of the French comic Asterix, about a Gaul warrior named Asterix who travels the Roman empire and drinks a mysterious potion created by the wizard Getafix, and the Japanese anime Dragon Ball Z, about a bunch of guys going around in search of the seven mysterious dragon balls. "Sensei" is another honorific, so his real name is Professor Getafix. - There are two opposing forces in J. Gothic Lolita is a real fashion trend in Japan. Here, it is explained in decidedly French terms. - Kogal, the opposing trend, involves tanning yourself beyond belief and dressing like a hideous slut trying to regain her youth. It is rather popular among smokers and beach-dwellers who cannot pull off "young and pale" any more and need some trend to latch on to. Kogal is a real fashion trend in Japan. Here, it is explained in decidedly French terms. - It is said Monet really liked this style, and that Girl with Watering Can was a fluke. (And it would be, seeing as Renoir was the artist.) Girl with a Watering Can is a painting by Renoir, and the girl featured looks retroactively Gothic Lolita. Monet really didn't draw it. Renoir did. They are French. - Japan-France has never won a war, preferring to commit Seppuku immediately rather than risk any dishonor. Seppuku to avoid dishonor is a Japanese stereotype based in fact. France never winning wars is a French stereotype based in fact. Combining them gets us a joke about Japan-France. -. Japan was once a monarchy centered around the Sun Goddess Amaterasu. "Du Soliel" is French. France once revolted against the oppressive rule of its monarch, and guillotined him. - Three years later, during Japan-French Revolution II, both sides were exterminated by their own weapons. The usual custom for this type of battle is that opposing rulers gut themselves seppuku style and the one who dies quickest is the winner. All soldiers in attendance then commit suicide in the same manner as their respective leader. This is a repeat of two jokes ago. - 1945 saw this strategy reborn in the modern age. Five minutes after finally entering World War II, Japan-France began crashing planes into its flagship, the AG-Fubuki. The other side got confused and went home. This is a repeat of three jokes ago reborn for the modern age. In WWII, Japan really crashed planes into enemy ships, and France really surrendered. The Fubuki was a Japanese warship, and the "AG" stand for "Arcade Gamer". - Japan-France has a standing army of 12. Understandably, very few people would want to join an army that keeps killing itself. - One of those is the penguin mascot 戦友たん Penguin mascots are common in Japan. This one's name is "War Buddy-tan". "Tan" is an honorific, so his real name is "li'l missy War Buddy." - who is way cuter than Nils Olav, the penguin mascot of Norway's army. Actually, Nils Olav is the commander-in-chief of Norway's army. Really. And a penguin. edit Mr. SandmanThis article in its current form was inspired by the joke on the Neil Gaiman page linking to it. On that page, they rename "The Sandman" to "Mr. Sandman" and make references to the song from the 50s. Here, I take the series and reframe it as a comic book about teens from the 50s. Most of the jokes are Sandman-as-50s-teen jokes, and visually, there are also Archie (and Josie and the Pussycats, and Sabrina the Teenage Witch) gags. Besides the general stereotypes of 50s teenagers (the big football game, vacations at the beach, and prom), the only Archie gag that made it into the text is the inclusion of the character "Jughead" in the list of characters. A page from the critically acclaimed Mr. Sandman. In this volume, Dream creates The Corinthian as a girlfriend for his friend Jughead. She turns out to be a "collector", which is 50s teenage code for "girl with a lot of boyfriends". Sandman is really drawn pale with black holes with blue centers for eyes. He also always talks in black squiggly speech bubbles. However, his hair is rarely this neat. The Corinthian is also a real character from the Sandman series, only he is a male, and the mouths where his eyes are are full of much pointier teeth. He is really a "collector", which is serial killer slang for "serial killer". An observant reader might catch a reference to the anime show "Urusei Yatsura" in the first panel, refering to Jughead's predilection for girls who wear animal print bikinis, like the one Lum wears, or the ones worn by the girls in Josie and the Pussycats. - Mr. Sandman is a comic book series by Neil Gaiman and is not to be taken seriously. From the refering Neil Gaiman page. - In publication since 1952, Mr. Sandman chronicles the day to day life of Dream, the anthropomorphic personification of the concept of dreams. This moves the publication date from the 90s to the 50s. It also implies that the comic is still going, which almost counts as an Archie gag. The rest of the statement is true about Sandman, but goes here on this joke page because it is necessary to frame the rest of the article. (I am personally of the belief that lies are funnier the closer they are to the truth, and this is almost certainly reflected in my writing.) - Dream, though immortal and existing since the beginning of time, is your standard everyday teenager from the 50s. Basic joke setup to make it clear to people who are reading this what is going on. I was once given this advice: "Say what you're going to say, say it, and then say what you said." It was advice on how to present scientific papers at scientific conferences if you're a scientist, but I think it applies to long-form jokes as well. - He enjoys such activities as going on dates with Calliope and Nada (rivals for his affection) In Sandman, Calliope and Nada are two girls Dream has dated in the past. He sent Nada to hell for eternity because she didn't want to be his queen, and Calliope got herself imprisoned of her own accord by a series of writers. Dream eventually (when he finally got around to it) helps set them both free. - meeting up with his old friend Jughead (a mortal granted everlasting life) at the same soda jerk every 100 years, The real character's name is Hob Gadling. But Jughead isn't a stretch. The soda jerk is really a bar. - and bringing into existence desirable new teenagers at the request of existing ones. A reference to the song Mr. Sandman. - The Endless are a family of seven immortal siblings, each anthropomorphic personifications of his or her own unique universal concept. Dream is the third oldest of the Endless. 100% true, but useful information for getting the joke, and also for making the joke read like a proper article on the subject matter. - Destiny is the oldest of the Endless, in the sense that he existed since the beginning of time first. He is the most responsible of the Endless, and is always watching out for his younger siblings, True. - but he is also a prankster and will pick on them given the chance. Not true, but makes him more believable as a character in a comic book about teens from the 50s. - He carries around with him a book that lists everything that will ever happen ever, and uses it to play tricks on his siblings and their friends. Destiny is blind, the result of an amusing incident involving Destruction and a football. The Endless, Dream's siblings. Left to right from top: Destiny, Death, Destruction, Desire, Despair, Delirium. (Yeah, their parents thought it was funny.) Image 1: Destiny - A non-edit of Alexander from Josie and the Pussycats. He makes a good Destiny because he looks older than the other Endless, wears sunglasses so he looks blind, and has his right arm hidden so I don't need to edit in a book. Image 2: Death - A paleification of Veronica from Archie. She looks like a serious older sister who could be popular and intelligent here. She usually doesn't. Anyway, for that one frame I used, she makes a good Death. Image 3: Destruction - A non-edit of Archie, falling over. It didn't need an edit. This is what Destruction would really look like if he were in a 50s teenage comic book. Image 4: Desire - An ever-so-slight paleification of Alexandra of Josie and the Pussycats. Doesn't she look like a dickgirl? Image 5: Despair - Perhaps my strangest choice, Sabrina from Sabrina the Teenage Witch. Lots of color correction on hair, skin, and eyes. She really should be much fatter and nakeder. But I liked the idea of someone who looks like this being Despair. She does look a little like she's about to snap. Image 6: Delirium - Josie from Josie and the Pussycats, with color edits and FISHNETS! I love her expression. -. Death in Sandman is a nice and personable girl, but she would hardly fit in a comic about 50s teens. She dresses in all black, like a subdued goth. Simple tank tops and black jeans. She also has huge hair, but not huge in the 50s way (see image of Despair), huge in the late 80s way. Her Mr. Sandman rewrite is quite different from the original visually, but not terribly much so in spirit. She is in fact the Grim Reaper, and her best friend is. Totally true in every way, only the real Neil Gaiman would make it sound more emo. -. The real Destruction left his realm because he didn't think humans needed any more help destructing things. He is in fact a redhead who fancies himself an artist and used to have a beard but shaved it off. He lives with his talking dog. - Desire is a dickgirl. Rather, Desire can't pick a sex and switches back and forth. Also, am I the only one surprised that Uncyclopedia doesn't have a dickgirl article? - Despair is a fatty who wishes she were a dickgirl. True. -. Totally true. I might get tagged for too much true. - Calliope is a perky young blond muse who goes to school with Dream. She is the mother of Dream's son, Orpheus, who lived in Greece a long time ago. I envision Calliope in Mr. Sandman as looking like Betty and being really perky but other than that being exactly like the Sandman character. - Nada is a Nubian princess and a wealthy socialite. She's best friends with Calliope, but all bets are off when they're fighting for Dream. Nada here is a mix of Veronica and Valerie from Josie and the Pussycats. As I mentioned before, Dream really sent her to hell for rejecting him (He's so emo!) but imagine her buying expensive clothes and storming off in a hissy fit when Calliope gets dates with Dream. - The Corinthian is one of Dream's creations. She likes to dress in bikinis, and has mouths where her eyes should be. She is very attractive and will eat people. Especially the eyes. She likes eyes the most. - Jughead is Dream's dorky friend. He is dating The Corinthian. In the original Sandman, Jughead... Jughead is a character from Archie who has a funny hat and nose. - Luci is a blond prettyboy and a recent graduage from a rival highschool. Because he was the captain of the crew team, and the crew team has ridiculously early morning practices, he is nicknamed Lucifer, The Morning Star. He now owns the diner "Hell's Milkshakes" and the spinoff nightclub "Hell's Milkshakers". Luci is Lucifer, The Morning Star in Sandman. You know, ruler of hell and all. He is in fact blond and pretty. - Bast is the lead singer of the band Bast and the Pussycats. She is the Egyptian goddess of pussycats. Bast is a friend of Dream's. She in turn is based on Bastet, the real Egyptian Goddess of cats. -. Cain and Abel, based on the biblical characters of the same names, are two dreams who live in Dream's realm. Abel really stutters, and Cain kills him all the time. (aside: WTF. Did I change anything for this article?) - . This has two inspirations. One is the amusing idea that Neil Gaiman is a dorky fanfic writer. The second, the idea that a writer would introduce a new distinct character by the same name in every story and then stop and make it one character, comes from Lupin III, where Monkey Punch originally started the character of Fujiko that way. Bonus trivia: Fujiko's boobs are much larger than "Neil Gaiman"'s. - Batman, aka The Goddamn Batman saves the day whenever the Endless get in over their heads. Batman is in fact a character in Sandman. He appears for one panel in the first volume. The original pitch for Sandman involved a lot of regulars in the DC-verse, but Neil Gaiman drifted away from this and included them less and less as the story went on. "The Goddamn Batman" is an actual Batman quote. The entire sequence goes guy in car with Batman: Who the hell are you anyway, giving out orders like this? Batman: What are you DENSE? Are you RETARDED or something? Who the hell do you THINK I am? I'm the Goddamn BATMAN. - 1: Preludes and Nocturnes - A cult tries to capture Death and instead captures Dream. Not knowing what to do with him, they lock him in their basement for over 50 years. When he escapes, he discovers that most of his stuff has been stolen - most importantly, his funny gas mask thing, a bag of sand, and a jewel he put most of his powers in. Volume 1 is mostly about him getting these things back. In chapter 8, "The Sound of Her Wings", we're introduced to Death, who was worried sick about him. Several dreams did escape and are causing problems, but that's mostly dealt with in Volume 2. - Volume 2: A Doll's House After returning to school, Dream meets the new girl Rose, who is a dream vertex and a total doll. He attempts to court her, and unwittingly attracts the affection of her grandmother, Unity. Volume 2: The Doll's House - Rose is the new girl, and a dream vortex, and a total doll. He tries to kill her but ends up killing her grandmother Unity instead. Something about Unity being the original vortex. Go read the book. - Volume 3: Dream Beach Over summer vacation, Dream reminisces about the summer years ago he and Willy Shakespeare would write and perform plays at the beach to amuse girls. Volume 3: Dream Country - Among other things, this includes the chapter A Midsummer Night's Dream, about Dream commissioning the title play by Shakespeare and inviting all his friends to watch it. - Volume 4: Season of Misses Out of a sense of duty and friendship, Dream sets out to rescue Nada from the clutches of Luci, owner of "Hell's Milkshakes", a popular diner among creeps and demons. Calliope, of course, gets jealous. Volume 4: Season of Mists - Replace "Hell's Milkshakes" with "Hell". Calliope was rescued in Volume 3. - 5: A Game of You - In fact about the character Barbie. But the joke from this chapter comes mostly from the play on "game" to "big game", something teenagers in the 50s were always really concerned about. - Volume 6: Fables and Recess Dream volunteers at a local elementary school, watching over kids during their breaks. When they discover his powers, they encourage him to indulge their whims - and they do have quite the imagination. Volume 6: Fables and Reflections - A series of short stories. Again, the joke is a title pun expansion. - 7: Brief Lives - Really about Dream and Delirium heading out to find Destruction. Of course, now that Dream is a teen from the 50s, he has to go on road trips with his new license. - Volume 8: Schoolyear's End On the last day of school, Dream and his friends get blocked from leaving by a severe hurricane warning. They sit around in the school cafeteria and tell stories to each other. Volume 8: Worlds' End - A bunch of travelers get caught in a "reality storm" (WTF) and sit around in a hotel bar 9: The Kindly Ones - The part about Lucifer giving Hell to Dream really happened in Volume 4, but what the hell. The rest is made up. I'm not going to tell you what actually happens in Volume 9. - Volume 10: The Prom Death, a shoe in for Prom Queen, gets sick in the weeks leading up to Prom. Then she gets better, and Dream catches what she had. The afterparty involves lots of toasts to the sick friend. Volume 10: The Wake - Not going to tell you what happenes here either, other than that it does involve a lot of toasts to Dream. - Volume 11: Sunny Days and Endless Nights The gang goes to the beach for summer vacation again. Each chapter tells the story of a different Endless on the last night of the Summer Carnival. Volume 11: Endless Nights - A story per Endless, oddly enough not set at the Summer Carnival. -. Plots made up to match titles based on "Death: The High Cost of Living" and "Death: The Time of Your Life". -. Included this for completeness. The real title is "The Little Endless Storybook". edit Super Smash Bros: Xtreme Beach Volleyball Okay, I'm going to explain this one a little differently. Cover Art for SSBX DOAX. It features Princess Peach (in the Cyclamen suit), Samus (in the Phoenix), and Jigglypuff (in the Lamina) Hitomi in the, shit, I think it's the Leo? Tina in the Jade? And Lisa in, fuck, I don't even remember. I'll fact check later., three of the characters from the game. note: I can't be fucked to change the image. Super Smash Bros. Dead or Alive Xtreme Beach Volleyball is a spinoff game to the popular Super Smash Bros. Dead or Alive fighting game series. SSBX DOAX, unlike the rest of the series, is not a fighting game, but rather a "Sports Fantasy Simulation" game, a cross between a dating sim and a somewhat Pong-like volleyball simulator. The events of SSBX DOAX take place between Super Smash Bros. Melee DOA3 and Super Smash Bros. Brawl DOA4, and are considered canon to the overarching storyline. Story Using his winnings from the SSBM DOA3 tournament, Mr. Game&Watch Zack heads off to Vegas for some gambling. He hits the jackpot playing slots roulette and wins a rather large sum of money. Using this money, he procures himself an island and names it Mr. Game&Watch Zack Island. He then sends out invitations stating that the next SSB DOA tournament, SSBB DOA4, will be held on Mr. Game&Watch Zack Zelda Kasumi. note: I am not going to do this twice, so it's up to you to determine what of what gets crossed out applies to Super Smash Bros. Hint: just about all of it. Volleyball 300 some Helena A young princess Opera Singer and heir to the Mushroom Kingdom and SSBTec DOATEC ( Super Smash Bros. Dead or Alive Tournament Executive Committee, a front for an organization experimenting in Genetic Engineering with the goal of creating a perfect soldier), Princess Peach Helena originally entered the SSB tournament to find out who killed her mother. She was originally disinterested in the politics of her father's company and wanted nothing to do with the tournament, but was left with no choice when Marth Donovan sent an assassin after her. Peach Helena believes Bowser Ayane is the assassin, but in truth, Samus Christie is. note: Not explaining profiles. Be happy knowing that they are in the style of DOAX profiles. Samus Christie Originally disguised as Princess Peach's Helena's maid, Samus Christie is later revealed to be a bounty hunter an assassin originally hired by Marth Donovan to spy on Peach Helena and eventually kill her. Although seemingly a strictly for-sale killer, Samus Christie has her own reasons for getting involved with SSBTec DOATEC; her parents were murdered by Ridely, an associate of Marth, when she was a young girl. She was then raised and trained as a bounty hunter by the Chozo. Or maybe not. I think she's just really just an assassin in DOA. ::shrug:: Zelda/Sheik Hayate/Ein Zelda Helena in the "Venus" bathing suit. The Venus is the most expensive and whorely suit in the game. note: Hayate is male and does not, strictly speaking, appear in DOAX. This bio is from the rest of the Dead or Alive series. Zelda Hayate is the Princess a ninja of Hyrule the Mugen Tenshin ninja clan, but was once kidnapped by Ganondorf and SSBTec DOATEC, where she was experimented on, brainwashed, and crippled. When she came to in Dreamland Germany, suffering from amnesia, she was taken in by Kirby Hitomi and Waddle Dee her father and trained in their dojo under the name Sheik Ein. Link Kasumi has abandoned his her village to find Zelda Hayate, a course of action that makes him her an outcast and earns him her the scorn of Young Link Ayane. Nana Leon Nana Ayane wearing the bathing suit "Pixie", a one-piece suit cut down the middle to the stomach. She is accessorizing it with her favorite jacket and hat headband. note: Leon is male and does not, strictly speaking, appear in DOAX. This bio is from the rest of the Dead or Alive series. On his herdeathbed, Popo Rolande, Nana's Leon's lifelong ice-climbing partner, uttered the words "The girl man I love is the strongest girl man in the world." She entered the SSB DOA tournament because she is insecure. Jigglypuff Lei Fang Jigglypuff Lei Fang, the spoiled only daughter of the famous Jigglypuff some rich Chinese family (and did I mention earlier that Helena is a famous opera singer?), is considered a child prodigy in the martial art of Jigglypuff Tai Chi Quan. She was onced saved from a gang of street thugs by Pikachu Jann Lee. While grateful, she feels that she didn't need the protection and could have taken care of the thugs by herself. She entered the SSB DOA tournament in order to beat Pikachu Jann Lee for her own pride. Pit Lisa Pit Lisa, while actually male female, is mistaken for a female by Mr. Game&Watch Zack. SHe seems to work for Mr. Game&Watch Zack, showing the girls around the island as they arrive. (SHe did not appear in SSBM DOA3, and is first introduced in SSBX DOAX.) Little is known about Pit Lisa other than that she and Samus Tina seem to be old buddies, and that she played volleyball in College. In SSBB DOA4, it is revealed that Pit Lisa is really the mysterious luchadora Kid Icarus La Mariposa and a former research scientist under the employ of SSBTec DOATEC who left when Marth Donovan tried to recruit him her for his anti- King Toadstool Douglas faction. And I haven't even played DOA4 so that might just be wrong. Trivia In the opening movie of SSBX DOAX, Jigglypuff Christie does some naked cliff-diving in the dark. These five twenty seconds got the game its M for nudity rating. - This is not the firstgame where Samus appears in a bikini. She's been doing that since 1986. - There is no nudity in the game, nor is there a nude cheat. The game is rated M for nudity because JigglypuffChristie does a little nude cliff-diving in the opening movie. - In the English language release of the game, Mr. Game&WatchZack is played by Kobe BryantDennis Rodman. - If you play as one of the girls in Super Smash Bros. MeleeDead or Alive 3 on a Gamecubean XBox with a SSBXDOAX save on it, it will automatically deposit money in that character's SSBXDOAX account. - A sequel is planned for the Wii360. Super Smash Bros.Dead or Alive. edit Bleach I'll do it later. I'm lazy. Leave me alone. edit License Note This article is a little strange, It takes the framework of a Death Note parody and applies it to commentary on the current state of the R1 anime industry. That is, it's a serious article disguised as a joke article disguised as a serious article. Also, at this point, monika explains shit has officially lost any semblance of what it once was. I'll begin with a brief discussion of Death Note and Death Note parody. Then I will explain the R1 industry in ways that pertain to the article. Unlike Death Note, I will leave all the value judgements to the reader. Death Note is an actual manga written by Ohba Tsugumi and illustrated by Obata Takeshi which ran from December 2003 to May 2006 (a year offset earlier than License Note is reported as running. I offset for two reasons - so that License Note would match up with the licenses I use, and so that the series would still be running when the article was written.) The character names as reported in the License Note article are unchanged, and the bios are fairly accurate, with the major changes being that the Death Note kills people instead of licensing anime, and L, Near, and Mello work for the ICPO instead of ADV and fansub groups. Also, in Death Note, Light is fully Japanese, and Misa is a pop idol who dresses in Gothic Lolita instead of a "Goth Idol", and she falls in love with Kira when he kills the guy who murdered her parents. Death Note is a favorite of parodists, for several reasons. - Death Note is popular. - L/Light is a favorite of yaoi fangirls, and anything that's a favorite of yaoi fangirls is a favorite of parodists. - Death Note has a fair degree of intrinsic quality to it. However, this is overshadowed by an equally high degree of failure. - Death Note makes some fairly heavy-handed moral judgements. - The conflict in Death Note is propelled forward by a Princess Bride-y logic game of "he must know that I must know that he must know..." - The characters of Death Note are all given a fairly interesting set of a priori character traits but nowhere near an appropropriate amount of character development after that. - Who dies in Death Note has been heavily spoiled. If you feel out of the loop now and really want to know, L and Light die as do , among many others, Penber and his wife (though she's never confirmed), The Yotsuba group, Watari, Rem, Wedy and Aiber, The US President, Light's dad, Takada, Mello, Mikami, and Misa, the last two shortly after the story ends. - Light really does look like Seto Kaiba from Yu-Gi-Oh! - Misa is really annoying. - The Rules of Death Note are just asking for it. The License Note article primarily borrows plot elements from Death Note without going too far down the direction of the standard parodies, mostly because it's all been done so much there is little to add. (I can't beat, for example, the <&Light|Hungry> chatscript where Light debates whether to go to McDonalds or Burger King based on how many clues L will find in the choice that point to Light being Kira.) But of course, I point out how annoying Misa is, make a handful of gay jokes, and rewrite the rules. I couldn't not. And now, The State of the R1 industry. I start with the companies mentioned in the article. - ADV, is one of the oldest and most successful R1 licensee companies, known for its fun, extras-filled releases and quality dub-work. Much of ADV's fortune, like Gainax's in Japan, comes from milking the Evangelion property. ADV is often seen as trying to create a monopoly in the US, with its many subsidiary companies and its strong ties to the Japanese industry, but in recent years, ADV has been cutting back, causing many to speculate on its financial stability after a string of years of excessive licensing of questionable shows. In License Note, ADV takes the role of the rich corporate entity whose success can be attributed to L, who works autonomously gathering licenses. The questionable licenses, however, are the result of Light's experiments with the License Note. - FUNimation is the R1 company that has undergone the most positive change in public opinion in recent years. Once known as a silly company famous for properties like Dragon Ball Z, FUNi gained credibility with good releases of shows like Kitty Grade, Fullmetal Alchemist, Case Closed, and Kodocha. However, FUNi always seems to have amusing licensing "issues". (These issues are attributed to Light in License Note.) They are currently working on their release of Beck, a show that required a significant amount of licensing negotiations, so we shall soon see if the FUNimation curse is lifted. - Geneon, formerly Pioneer, is the choice of R1 companies when you want a quality release with a straight dub but you don't care about the price. (Despite the imagecast, though, this doesn't necessarily hold, except the part about the price.) In any case, um, Geneon releases tend to have pretty cover art. - Bandai is a Japanese company with an American branch. It gets first dibs on Sunrise titles since they are children of the same parent family. For the properties it really cares about, Bandai tends to go all out with spectacular releases, special editions, and the like. - CPM is an oldschool company with a history of unconventional licenses. In the early days of DVD, CPM was plagued with problems stemming from a poor understanding of the new medium. In recent years, CPM is known for providing affordable release of forgotten classics (and other, not so good old shows no one ever licensed.) However, their business model is not particularly profit-getting, and recently CPM suffered a near-bankruptcy, and is currently working on selling off its old releases and getting back on its feet. - Japan-X is a new hentai company which distributes through Hustler Video and doesn't dub their releases, much to the annoyance of people who like their moaning in English. And the anime listed in Licence Note (There are a number of continuity errors in the license dates in reality and the license dates in License Note, and I'm not going to hide from them.): - The Melancholy of Suzumiya Haruhi - Unlicensed, currently confirmed the subject of a bidding war. - Haruhi is the story of a Japanese School Girl who also happens to be God and doesn't know it. All she wants in her life are time travelers, aliens, psychics, and a hot sardonic guy to bang, so she starts a club and recruits one of each. It's a quality show despite how overrated it is. R1 fans are really worried about how it will be handled by the licensee. One problem is that the show has two proper viewing orders. The order it was broadcast in Japan has the core of the story in chronological order, but a number of filler episodes interspersed as flash-forwards. The R2 DVD release order is almost completely chronological with the exception of a teaser episode to start the series. The explanation is that the broadcast order is ideal for a first viewing and the DVD order is better for subsequent viewings, (and given this, the R1 release should be in broadcast order,) but it's unclear if the winning licensee will understand this. (Furthermore a release schedule of six episodes to a DVD, four in each order, has been proposed but it's unlikely the R1 company would do something so nice.) Itsuki, who Light uses as the Kira company mascot, is really a fag psychic. - Tsukuyomi -Moon Phase- - Licensed by FUNimation, announced 10/28/2005 - A cute show about an underaged, cat-ear-wearing, gothic loli vampire girl. It's not a very good show, but it wins hard with its excessive levels of cuteness and addictiveness. In License Note, Light licensed it to FUNi as a guilty pleasure of his, but it is the kind of show that Misa is an unabashed fan of. - Ghost Stories - Licensed by ADV, announced 10/24/2004 - A positively crappy children's show that no R1 company in its right mind would license. ADV got this cheap, tacked on to another license (as stated in License Note) and, realizing it sucked, gave it a raunchy, ad libbed, Steven Foster dub. - Kodocha - Licensed by FUNimation, announced 4/10/2004 - Kodocha was a popular show when it began airing in 1996, but common sense indicated that it would never get licensed. (It is long-running shoujo.) In 2004, FUNi announced the license, to everyone's surprise. However, there were some catches. The band TOKIO was asking far too much for the popular first opening theme song, and FUNi refused to pay for it. Also, one of the band members made some cameos in the show, and, when FUNi wouldn't pay for the song, they were told that they could not use his voice either. Rather than cutting the dub completely, FUNi just muted out his scenes and replaced the opening with the second opening themesong. Such things cannot be coincidence, and are obviously caused by the License Note. - Fighting Spirit - Licensed by Geneon, announced 11/01/2003 - Ippo, like Kodocha, is a show that will never get licensed, until Geneon did. (In this case, it is a long-running sports anime.) - Planetes - Licensed by Bandai, announced 7/31/2004 - ΠΛΑΝΗΤΕΣ is an existential/office humor anime about the lives of Garbage Collectors in space. It is very good and you should watch it. It is very popular among NASA employees, and a bunch of them approached Bandai with offers of doing special features about how garbage collection in space is very important, which they were taken up on. The first three volumes of Planetes were release as two-disk special editions with fancy slip covers and really nice packaging, but halfway through the series, Bandai gave up on it and downgraded it to more pedestrian packaging. No wonder Light is bitter. - Votoms - Licensed by CPM, re-announced 12/26/2005 (originally released a long while ago) - Votoms is a classic mecha show. This isn't really a new license, but this rerelease is the power of License Note. In 2006, CPM rereleased Votoms as a 4 sets x 2 disks-per-set x $35 MSRP sub-only collection. The set is completely with an ammo can (lunch box, really) stuffed with special features. It is really the ideal release structure for a show of this (awesome) quality and (lack of general) appeal. Unfortunately, the top of the ammo can keeps falling off. - The Fuccons - Licensed by ADV, announced 7/5/2005 - The Fuccons is a show about Americans living in Japan. It is animated in the sense that humans are portrayed by mannequins. It's not worth your time. I keep getting copies of the promo disk that I can't get rid of fast enough. If you want one, ask me. - Full Metal Panic! - First season and spinoff licensed by ADV, second season licensed by FUNi, announced 7/29/2002, 5/15/2004, and 5/26/2006 - Honestly I think FMP! sucks, and can provide my reasons, but no one ever listens to me, but whatever. People like it. And everyone was shocked when FUNi got second raid. - Macross - Licensed as part of Robotech by, let's see, Streamline who rewrote and dubbed it and kind of gave it to ADV, and as Macross by Animeigo with the caveat that there be no dub since they didn't like Robotech, and then Animeigo lost the license and ADV got it and dubbed it as Macross in a completely different way from the Robotech dub, and I'm not even going to try to find all these dates, but the last ADV license was announced 7/2/2005 - Yeah. Macross. This is License Drama in its purest form. - Princess Tutu - Licensed by ADV, announced 4/9/2004 - This is a seriously good show despite its name. After the first volume didn't sell well, ADV delayed the second DVD for like a year. No wonder L is bitter. - Sister Princess - Licensed by ADV, announced 7/5/2003 - This is about a guy who has 12 (or 13) younger sisters that want him. Or something like that. - Cyberteam in Akihabara - Licensed by ADV, announced 8/9/2003 - This is about a 12-year-old girl in the future who wants a robotic pet but she can't afford one so she becomes a magical girl. Or something like that. - Akiba Girls - Licensed by Japan-X, announced late 2005 - The first hentai licensed by Japan-X. - Ranma ½ - Licensed by Viz since forever - Just giving this a shoutout since I use a character's name for L's pseudonym. - xxxHolic - Licensed by FUNi, announced 10/6/2006 - A Gothy Clamp show that isn't very good. The kind of thing Misa really likes but Light despises. This got licensed in the chunk of time between me thinking I might want to use it in the article and my writing the article. Such is the power of License Note. - Akagi - Unlicensed - Akagi is about yakuza playing mahjong. It's really good. Like Death Note, it has a lot of "I knew he'd expect me to do this so I was going to do that instead, but of course he knew that and I knew he knew that and he knew that [...] so I did this other thing instead and now he's dead!" going on, but it's much less annoying than in Death Note. It's the kind of show both L and Light like, but since L is dead now, it'll probably never be licensed. - Best Student Council - Licensed by ADV, announced 10/2/2006 - This is a show about a retarded girl with a hand puppet who gets elected to the student council of her high school because the lesbian president wants her. Or something like that. Seriously, do you expect me to watch all this crap? Everyone was shocked by the license, because so many GOOD shows came out that season and ADV picked this one that sucks. A fitting "fuck you" sendoff to L from Light. - Anime Sanjushi - Unlicensed - A remake of The Three Musketeers only Aramis is a hot chick. It's a very good show, only no one's fansubbed it. It's also the kind of show Light will eventually license, which is what prompts Mello to pull a gun on Near when Near suggests it as the next project. It is also probably the case that Light hasn't licensed it yet because he hasn't seen fansubs of it and so it is not as high on his licensing list as it should be. This is the balance between fansubbers and legitimate companies that Mello wishes to restore. Phew. That's it. That's all the background information you need to get most every joke in License Note. I am so sorry... PS LIGHT WAS RIGHT LIGHT WAS RIGHT LIGHT WAS RIGHT LIGHT WAS RIGHT LIGHT WAS RIGHT LIGHT WAS RIGHT UPDATES So, um, I'm going to keep the explanations as given, accurate to October '06, and use this space all the way down here to update news and whatnot. - The Melancholy of Suzumiya Haruhi was licenced by Bandai on 12/22/2006. - Death Note was licensed by Viz on 01/10/2007. That was less news than I thought I'd have. edit Moë David Ellefson of Megadeth, with a Hello Kitty Bass, modeled after his actual real life custom bass, and Lisa Loeb's custom Hello Kitty Fender guitar. Athena is the name of David's daugher, and the Greek goddess of War. Varg Vikernes holding a cat. In the original image, he's holding a spiked bat. Rick Allen is actually missing that arm, but in order to get the effect I wanted for the image, I used an image of him from before he lost the arm, and edited the arm out. The badage pattern is based on Rei's from Eva. The "penis with MS Paint added" picture is based on the 4chan TEH REI meme, making fun of people who moe all over Rei. Yuki of The Melancholy of Suzumiya Haruhi placed third in the 2006 SaiMoe competition, beating out co-characters Haruhi and Mikuru in rounds two and three. Here, she has been edited to wear corpse paint in the style of Abbath from Immortal. Let me tell you the story behind this article. It must have started in '83 or '84, when, during one of the many regular visits to my great-grandmother in New Jersy, my mom decided it was time she could begin to leave her toddler in the care of her middle-and-high-school-aged cousins while she and the other adults did adult things. What better way, my first cousins once removed must have thought, to entertain a two-or-three-year-old than to let her watch cartoons? And by cartoons, they meant the n-th generation VHS fansubs they were planning on watching anyway. And Robotech. (Robotech was in English.) And so, during my formative years, I was exposed to some of the manliest anime of the '70s. Space Pirate Captain Harlock, Lupin III, Aim for the Ace... Okay, that last one isn't so much manly as about high school girls beating the crap out of each other with tennis balls. These shows had a lasting effect on my idea of what manliness is. Manliness is a tall, thin, early-30s, girly-looking boy with pretty hair who will totally kick your ass and everyone else's if he feels like it. But he'd rather be alone in his room brooding over sake. Also, space pirate. (As a side note, I was highly disappointed by the '80s and '90s anime industry's selection of men. In the 80s, they were all accident-prone jokester perverts, and in the '90s they were all spineless nancy-boys. Luckily, the industry seems to be moving back in the right direction. In case you didn't read my explaination of Licence Note, Gankutsuou is a good show and you should watch it.) Anyway, in late high school and throughout college, I began to get into metal. Specifically, I began to get into having the hugest fucking crush on Cliff Burton. Man, that guy is hot. I'd so do him if he wasn't dead. Damn. So fucking hot. Anyway, metal has a similar aesthetic to '70s anime manly shows. Here we have a bunch of scrawny boys with hair down to their asses wearing leather and spikes and wielding axes (given the right sub-genre) and stuff. It worked for me. But not every metal musician made me want to jump him. A significant number of them brought forth an "awwwww! how cute!" reaction from me. Mostly because I'm completely screwed up but also because they are really really cute. David Ellefson was probably the first where I really realized how strong the reaction could be. He's really, really, really cute. Like, really. Early in the 21st century, the term "moe" gained significant popularity in the anime fandom. Precisely what it means is up to some debate; some will tell you it means a fatherly, non-sexual love towards a cute anime character, others will tell you it means a perverted, sexual love towards a cute anime character. The first group is right. The second group won the revert war at Wikipedia. (There you have it people, proof that Wikipedia is an unreliable source.) Well, when I got to grad school I kind of moved away from metal. My life was depressing enough, and I tried to compensate by listening to happier music. (This method doesn't work. I know that now. Your best bet is to listen to music that is one or two steps above you in cheeriness. I've been listening to a lot of Black Metal lately, and it's really been helping my mood.) In my third year, though, the show Metalocalypse began airing on TV. And DAMN if Toki isn't the most adorable thing ever. I believe it was some time during the second episode when I let out my first ever involuntary "moe~~~~~~~~~!" And then it struck me. THIS, despite its odd, non-little-anime-girl choice of target, is what EXACTLY moe is. Anyway, about the jokes in the article... They're all true with the exception of moe being spelled with an umlaut, and moe being something common in metalhead girls instead of otaku boys. And every time a word is a portmanteau of two other words, I change the source words. A few of the jokes are my standard take two unrelated things and pretend they're one thing kind of jokes. Most notably: - Varg Vikernes is combined with Nevada-tan - Rick Allen is combined with Rei Ayanami of Evangelion - Metalocalypse has been combined with your choice of Miyazaki works. Perhaps you liked Spirited Away or Princess Mononoke? I'll go with Castle of Cagliostro (which, according to Spielberg, is the best action movie ever) as the quote I have him saying is based on something he said about character Clarisse - Gorillaz has been combined with Negima!? Also, Jason Newsted really whines a lot like that. Poor guy was compared to Cliff Burton for most of his career, and he really isn't nearly as fucking hot. Oh man is Cliff hot. edit Varg Vikernes Explaination coming soon. Sooner than Bleach's, anyway. edit Marty Friedman or HowTo:Construct an article that spans 27.5 subpages and that no one will ever read. Then again, I've been writing this page for a long time, so I'm used to people not reading things. The general structure of the article The Marty Friedman article is a meta-article. The joke is not completely within the text of the article, but also within the story of the article. This is the story of a little Japanese girl a guitarist who finds himself the subject of an encyclopedia entry, an entry he feels doesn't convey who he really is. The story begins with the original article, followed by our protagonist's first edit, a sloppy but well-intentioned edit expanding upon what our protagonist considers to be his defining moments, and then the first revert, in the spirit of Wikipedia's "no original research" policy. This first act introduces the characters and the conflict between our protagonist and the encyclopedia. In the second act, our protagonist, enraged at this affront to his own self-image, begins to fight for his right to define himself against the overpowered encylopedia in the only way he knows - vandalism. He starts with a few small, well-placed changes to the article, taking the opportunity to knock those who have done him wrong, grandstand his own accomplishments, and rave against that which has abanoned him (and which he has abandoned.) The encyclopedia, omniscient as it is, quickly reverts these changes. Our protagonist counters with a swift blank-and-smilie vandalism, and is once again shut down. In act three, our protagonist has decided to play the encyclopedia's game. Instead of making edits to the original article, he writes a new article, one that, to him, is truth. He is, however, shut down one final time, his article locked for eternity. The story is told over 9 revision (and 15 diff) pages, a history page, and a user page for our protagonist. A revision page With the exception of the final page, each revision page starts with a header that looks like this: It may differ significantly from the current revision. You might say that this looks nothing like Uncyclopedia's old page headers. That's because it was copied from Wikipedia's old page headers. And the incredibly well-thought-out reasoning for that is that they are more eyecatching and I'm worried enough people won't get what's happening. Anyway, yeah, I just jacked that from Wikipedia. (That section is, by the way, encased is a big <noinclude></noinclude> tagset because the pages are included in the diff pages, and that part isn't needed there.) Of the nine revisions, seven of them are based on the "unvandalized" page. In these cases, this header (or in the case of R8, the locked template (similarly jacked from wikipedia but made more obvious for the purpose of making sure people get it by beating said people over the head with it)) is immediately followed by a {{User:Monika/Marty/Marty unvandalized}} or a {{User:Monika/Marty/Marty unvandalized|1}} or {{User:Monika/Marty/Marty unvandalized|3}}. (Note from the future: Even after this was all moved to the main namespace, the {{templatey bits}} were kept as my user space, so I wouldn't need to move things to the template namespace. I might fix this differently later, but ah fuck it.) The "unvandalized page" explained The unvandalized page is a giant-ass huge-ass template. Contained within is the entire original article and the two edits based on the original article. To call the original version, you simply include the page. To call revisions 1 or 3, you include the page modified by a | and the revision number. It works using parser functions that look like this. {{#ifeq: {{{1}}} | 1 | Text for revision 1 | {{#ifeq: {{{1}}} | 3 | Text for revision 3 | Unvandalized text }} }} {{#ifeq: {{{1}}} | 1 | Text for revision 1 | Unvandalized text, untouched in revision 3 }} {{#ifeq: {{{1}}} | 3 | Text for revision 3 | Unvandalized text, untouched in revision 1 }} {{#if: {{{1}}} | Text for revision 1 and 3 | Unvandalized text }} If you can't figure out what that means, then you'll never need to use it. Also, there are a lot of <ref> calls in the page. For annoying formatting reasons, they are all written <nowiki></nowiki><ref> blah blah blah </ref><nowiki></nowiki>. For some reason, without the empty nowiki tags, they'd look fine in preview, but, when included in other pages, would generate text that looked like this. Only not just one line. The whole article. And they'd then nest. The diff pages The diff pages were a pain in the ass. Luckily, because I rigged the history page, I only needed to make 15 of them. I started by generating "static copies" of each revision page in my sandbox. This meant replacing all includes with substs and all #ifs with subst:#ifs, then saving and repeating until all of them were gone. I also replaced all <nowiki></nowiki> pairings with, um, nothing. And categories were left in the original [[category:some category]] format instead of the modified format described in the next section. After doing that, I let uncyclopedia calculate the differece between the pages and then I jacked the table from the page source. Of course, that wasn't the end of it. See, the pages source is html, which is not quite the same thing as wikimarkup. Kept as is, anything in <a href=>linky name</a> tags ceased to be a link and anything in [[linky|linky name]] became one. Images expanded, '''formatted text''' came out formatted, and all sorts of bad things happened. To fix this problem with the minimum amount of actual work required, I simply ran some find/replaces on the text. I replaced [ with <nowiki>[</nowiki>, ' with <nowiki>'</nowiki>, { with <nowiki>{</nowiki>, toc with <nowiki>toc</nowiki>, and http with <nowiki>http</nowiki>. For this reason, the page source is huge and unreadable. But it looks good on wiki and that's what's important. For the same reason, I went through and added spaces to long links so that the diff table would stay within the page width like it's supposed to instead of expanding the page sideways like it does in reality. Just for you. Categories This article belongs to the following categories: [[Category:Japanese]][[Category:Guitarists]][[Category:Pop musicians]][[Category:Awesome People]][[Category:Jewish]][[Category:Heavy Metal Maniacs]]. More specifically, the original version belonged to [[Category:Jewish]][[Category:Guitarists]][[Category:Heavy Metal Maniacs]] and the rewrite belonged to [[Category:Japanese]][[Category:Guitarists]][[Category:Pop musicians]][[Category:Awesome People]]. Or more more specifically, before I came up with the thing I'm about to explain, seven revision pages and thirteen diff pages belonged to [[Category:Jewish]][[Category:Guitarists]][[Category:Heavy Metal Maniacs]] and one revision and one diff page belonged to [[Category:Japanese]][[Category:Guitarists]][[Category:Pop musicians]][[Category:Awesome People]]. This is clearly stupid. To fix the problem, I replaced all the category sections in the pages that are not revision 7 with something that looks like this: which behaves exactly like the category section but doesn't actually add the articles to the categories. In revision 7, I added all six categories the regular way, and then added: in such a way that, in Firefox, it covers up the real categories. This doesn't work in IE, which makes me want to stab people. Dr. Skullthumper is magic. Anyway, this way, each page links to the categories it's pretending to be in, and each category links back to the main page only, even if the main page isn't in that category (unless you use IE). Dynamic dates This revert war took place between 8 days ago and yesterday, no matter when you look at it. Why? Because I can. How? Well, every time I needed an edit date, I used something like XX:XX, {{ #time: j F Y | -D days }}. where the XX:XX is whatever time the edit took place, and the -D is how many days ago the edit took place. What I'm kicking myself for not doing Even with the extensive use of parser functions and includes in this article, there was still a lot of stuff that was painstakingly copied from one subpage to another, stuff that could have been templated, such as usernames and times of edits. And those header thingies. And formats for diff pages and shit. Next time I do an article like this (NEVER), I will use a shitton of templates. But since doing that would otherwise create a shitton of subpages for the article, I'll just make them all one big template (say, [[articlename/onebigasstemplate]]) and make one of the options which subtemplate you intend to use. Then I'd stick the whole thing ine one big parser function, and you'd call it like this: {{articlename/onebigasstemplate|whichfuckingsubtemplate=thatonewiththething|blahblahblahotheroptions}}. Feel free to steal my idea, since I was stupid enough to post it. edit I am going to use this space to apologize for all the jokes in my articles, thus clearing my conscience. Don't read this unless you think I might feel bad I made fun of you. edit Megadeth I've hurt a lot of people I deeply care about in this article. Dear Dave Mustaine, I am sorry for gathering together all your hypocrisy together on one huge page and making a big joke about it. I understand that you have a lot of trouble with personal relationships stemming from your childhood raised by a single mother who wasn't always there for you and her unsuportive family. I don't judge you for what you've done to everyone; that's between you and God and I know you talk to him daily, like, for real now and not just because it worked in a song. I hope you can forgive me. - -monika Dear David Ellefson, I am sorry for constantly making fun of how young you used to look when you were younger. I am also sorry if this or any other article I've written makes you look like a pushover or a pansy. I actually admire your patience, your tact, your ability to deal with people like Dave. You're the ultimate bassist. (And don't tell anyone else, but you're my favorite member of Megadeth. Also, I will be buying that new F5 album the day it streets. Please tour it and come somewhere near me.) - -monika Dear Marty Friedman, めんご. - -monika Dear Don Menza, I'm sorry for implying that letting your young child hang out with Buddy Rich was child abuse. Nick turned out fine and I would like to thank you for impregnating his mother. I'm also sorry for calling you a footnote. The Pink Panther theme is awesome, and I also like a lot of the other things you've done. I wish I could breathe circular. That is a seriously awesome skill. - -monika Dear Nick Menza, I'm sorry for making fun of your cancer scare. I too have been kicked out of stuff for trying to take a medical leave. I'm currently dealing with it in a way that'll hopefully result in me getting un-kicked-out, and if I end up getting kicked out again a few weeks later (likely) then I will totally know what it's like to be you. You are just as adorable as David. You are my favorite former Megadeth drummer alive. - -monika Dear Jeff Young, I'm sorry for not making fun of your cancer. You're probably thinking "Couldn't she have just said 'Jeff Young has testicular cancer' and left all the other stuff out?" Yes, I could have, but it seemed too much with the Menza stuff shortly after it. I felt for balance I should leave it out and instead make fun of all the other stuff. If it makes you feel any better, I am also in a world music band. - -monika Dear Chris Poland, Your finger is creepy. Stop letting Dave convince you to rejoin Megadeth. By the way, I like collecting music made by former members of Megadeth, and the one thing I've been stuck on for the longest time (both because I am super-interested in it and because it is a lot harder to find than everything else I've looked for) is the Jazz-Fusion stuff you did before Megadeth. If you have anything, please send it to me. - -monika Dear Chris Broderick, I like that your favorite album ever is Perpetual Burn. I really like that album too, and an appreciation for Jason Becker is something I look for in a Megadeth Guitarist. You are my third favorite Megadeth lead guitarist ever, and that's seriously saying a lot. I honestly really really hope you spend a very long time in the band. Don't let Dave abuse you too much. - -monika Dear Gar Samuelson, I'm sorry for implying that Dave only speaks highly of you because you're dead, and I'm super super super sorry for implying that he only speaks highly of you because you're dead because he's worried you'll come back as a zombie and attack him. Please don't eat my brains. - -monika edit From here I am going back to explaining things. I'm creating this new section so I can keep my articles in order. If you aren't sure if you should be reading this or not, read the introduction to #I am going to use this space to explain all the jokes in my articles, thus rendering them unfunny. edit Fermat's Penultimate Theorem If you don't like math or big words or things like that, here's a version of the article just for you. Fermat's Second-To-Last Theorem "4chan" thinks this is funny according to this article because it is a chimp with a log up its butt and log is a math pun and it's a picture of an animal with text on it. 4chan invented lolcats, but they don't call them lolcats over there. Fermat's Second-to-Last Theorem is a thing that says: Something simple but in complicated terms (like the unabridged version of this article) in big letters. This thing is stupid and nobody cares. Also, Fermat was stupid because some already solved this problem. This is an easy problem and Fermat wasted lots of space on it. Because if this, he couldn't write down his proof to his last theorem. Nerds think this is really funny for some reason. On math humor sites, they make fun of it the same way people make Oscar Wilde jokes at Uncyc. Fermat's Last Proof Because Fermat didn't prove his last theorem, his last proof is for his second-to-last theorem. He wrote it in the copy of some really old math textbook that his wife got him because she doesn't understand him. He used this book like a diary, which was what they used to call blogs. Dear Diary, I was thinking blah blah blah math math math math math math math mistakes blah blah blah backtrack and start over... math math math It's dinner time and I'm fat and hungry. I'll finish this later. Back. I don't understand what I wrote above this. math math math I don't know how to solve so I'll write to a friend. Then Fermat's dad told him to get a real job. The Proof Someone Else Did First All that thing says is that when you add two numbers together, the result isn't xbox huge compared to the other two numbers. Even I knew that and I'm reading an article version that was written just for people who don't like math and big words. Fermat's Second-To-Last Theorem in "Popular" Culture Only nerds think this is funny. - Star Trek nerds think this is funny. - People who listen to nerd rock think this is funny. Also people who listen to nerd rap. - Something about a program only math people use. - Something about "mathworld" but really about Uncyclopedia. That's one of those self-hating Uncyclopedian jokes. Again, only nerds think this is funny. edit What would Cliff Burton do? I wrote this article entirely under the influence of Nyquil and Cēpacol. That's my explanation. edit UnNews:Obama unveils education reform plans asplained Then again, what do I know? I almost didn't write this article. I almost wrote an article for Best Illustrated but I was having hard drive issues at home and at work only have access to GIMP, which is just like Photoshop only it hates me. And even if I had had a working Photoshop available, I probably would have realized that the particular article I had in mind would only serve to confuse and annoy the judges and everyone else, much like my last attempt at PLS did. (I would have realized it, but I probably would have entered anyway. Hell, I'll probably consider it again next time around.) I almost considered finding another category and topic based just on the judges. Nothing against any of them. In fact, Led and Gerry are total sweethearts and also completely rational people, and my interactions with have been pleasantly minimal. It was more of a percieved incompatibility of subject matter and style with taste. Anyway, I wrote this article. I almost regret it. I don't really like it. But whatever. I also almost didn't write this explanation. I seriously just deleted it and wrote about my experiences with standardized testing (all pleasant or profitable) and then hit the back button a few times. Drugs are bad. Stay in school. That's what everyone always told me. edit User:Monika/explainsshit#UnNews:Obama unveils education reform plans asplained I was having a bad day. Keep scrolling for an actual explanation of the article. edit UnNews:Obama unveils education reform plans explained Let's try something new, and by something new, I mean something I used to do on this page a lot. UnNews:Obama unveils education reform plans Now this is how you teach civics. This article starts with a press conference quote from Obama that hints at what is to come. It makes a quick swipe at Bush's No Child Left Behind policy (which is really the primary target of this article) and then suggests that the strength of an education system is measured by the smartest kids, not the dumbest kids. There are also a few "uh"s in there and some slightly-awkward-when-transcribed speech patterns for Obamarealism. It also sets up the premise that Congress and America was presented with this legislation on Friday, March 20th, and provides a brief outline of the process of how this proposal will becomes law. Second paragraph furthers the premise that this proposal is the polar opposite of NCLB and also starts the joke that the media coverage of this bill is going to be silly and partisanish. Third paragraph makes the joke that congress is super-inefficient, even when it comes to the children and other important topics. It also makes fun of a particularly stupid real congresswoman who, despite never graduating from college, thinks she knows what is best for education, and vocally promotes basing public school curricula on the bible. The You Can Lead a Child to Education but You Can't Make Him Learn Act This section explains the basics of the proposal. The main point is that catering an education to the smartest single kid in the class makes for a better education than catering to the dumbest. (Of course, there is no middle ground to consider.) Like NCLB, the act is primarily concerned with alloting federal funding for public schools based on performance. "Florida Drift" is a real problem, although it's neither called that nor quite that extreme in the real world. A lot of states, especially Florida, rewrote the tests each year to be easier, in order to empirically show that they were improving education. NCLB as written actually encouraged this. This lowering of standards, in addition to other curriculum adjustments made in response to NCLB, led to measurable decreases in the quality of education children received. (Also, lots of college professors bitching about how incoming students have the writing skills of ten-year-olds...) The test descriptions I included just to keep up the premise. They are closer to AP tests than SATs, and that's not necessarily a bad thing if only top tier students are taking them. The "teacher salary" thing I figured needed to be touched on a little bit, and the federal government sending enforcers to clean up stupid schools I thought was a cute idea. Class of 1999 is a fun movie and you should watch it. First you should watch Class of 1984 though. The Finally, Some Sense in Elementary and Secondary Education Act A short clip of the film adaptation. Ignore the number after the title. It's just a short clip for reference, not the entire movie in six minute intervals and Greek subtitles on youtube or anything. (Try google video.) This section focuses on pro-policy arguments. The intro paragraph's description of the ignorant and proud of it crowd is far too real for Uncyclopedia but I included it anyway because it makes me sad. (By the way, for those of you who don't follow American politics (or civics), Sarah Palin, poster child of all that stuff, can't run until 2012. This is one of those "actual jokes" I get accused of not making enough of.) The paragraph ends with a bit of delusion that it used to be much better than this, when it reality, it was merely a bit better than this. Second paragraph I didn't want to make too obvious, but if you're here, that's what you're here for. The first "administration official" is Malia Obama, the president's older daughter. The third is Sasha, his younger daughter. If you're unfamiliar with Harrison Bergeron by Kurt Vonnegut, check it out. (It's very short, if you don't like to read, but if you don't like to read, you won't get it. I don't know what the plan was there...) The movie with Sean Astin also good. And there's a short film coming out soon called 2081 that's going to be more faithful to the original story. Better for the smart kids During NCLB's reign, there were stories of parents with very smart children (think "can read perfectly by age four" type smart) who reported that their children's teachers would attempt to get the children held back a year so as to increase the teacher's students' average performances on standardized tests. Luckily, most parents with that kind of child are also good, involved parents, so teachers were rarely successful, as far as we know. Anyway, I skipped over that part of NCLB criticism for a joke about bloggy internet movements, the priorities of children, and an "Obama reads Spiderman and that's funny because he's the president and Spiderman is a comic book" joke. Freedom!, by the way, was one of my favorite MECC video games, even though it was recalled shortly after it came out. In the game, you played a runaway slave who had to reach a free state by means of the Underground Railroad. It was a very difficult game, and most of the time you ended up dead or recaptured (often crippled beyond hope of ever re-escaping) and it had a lot of interesting game-theory and probabilistic elements. At the beginning of the game, you were randomly assigned as being able to read or not. If you could not read, all signs along the way were translated into gibberish, and by "translated into gibberish" I mean hit with a substitution cypher. This, of course, meant that the smarter player could learn how to read pretty quickly even when the character could not. It was a fun and challenging game. Better for the dumb kids Here, I go the easy route of actually presenting the argument against NCLB, specifically that the students who couldn't keep up with the requirements were actually left behind by being dropped from or encouraged to leave the public school system. Better for the average kids, even An actual argument with a bit of fictional flourish from an actual child psychologist who might actually agree to some extent with the words I put in his mouth. And if you think that last Olbermann rant is distressing, check out some Quiver-full movement rhetoric. Or watch Idiocracy. (For future readers who have the benefit of not being stuck in our current news cycle, the other references are to Nadya Suleman and the Palin Bunch.) The Why Do You Hate American Children Act This section is wharrgarbl based on increasingly stupid conjectured real arguments from hypothetical people opposed to this proposal. wharrgarbl. The Blue Penis Act And finally, to top it all off, we have the kind of "media feud" that the media likes to cover these days. I'm with Rachel on this one. I don't know how many times I've had to say "Blue Penis" in reference to this article, but it doesn't even come close to being a proper reflection of the actual reaction to blue penis. The blue penis is really just a small part of this article and while important symbolism in and of itself, is one of the least meaty bits of the article. But of course, not only do people not understand why there is blue penis or what purpose it serves in the article, they also focus on the blue penis as if it were the only part of this article worth mentioning. Blue penis.
http://uncyclopedia.wikia.com/wiki/User:Monika/explainsshit
CC-MAIN-2015-40
refinedweb
13,423
72.05
Convert Object to String (Java) This program is under development. Please help to debug it. When debugging is complete, remove the {{develop}} tag. Because of a neat compiler trick, any Java reference type or primitive that appears where a String is expected will be converted into a string. For an reference type (object), this is done by inserting code to call String toString() on the reference. All reference types inherit this method from java.lang.Object, and most (if not all) classes provided in the standard Java Application Programmer's Interface (API) override this method to produce a string that represents the data in a form suitable for printing. To provide this service for Java's primitive types, the compiler must wrap the type in a so-called wrapper class, and call the wrapper class's toString method. Other classes, for example those that you write yourself, will inherit the standard Object.toString() which produces something like "ClassName@123456" (where the number is the hashcode representation of the object). Thus even user-defined classes will print something! If you want something friendlier, you have to provide a method with the signature public String toString(). But once you have done so, you get the "print friendly" behaviour of the built-in Java classes for free. This article will present more detail on Java's built-in string conversion methods with examples of converting and concatenating literal strings, creating strings from the reference types provided in the standard Java APIs, and the use of the wrapper classes to print the primitive types. In doing so we will present a description of the Object.toString method which is responsible for most of the magic that the Java compiler performs to convert things to strings. The article concludes with a description of a user defined type's default printing behaviour and its behaviour after introduction of a custom toString method. It is hoped that the article will illustrate some of the background to Java's automated string coercion. It also illustrates some of the extra complexity that a language which makes a distinction between object types and primitives creates for itself. [edit] The Automatic Conversion of Objects to String in String Literals The developers of the Java language recognized that the manipulation of strings would be important to the developers of Java programs. They thus wanted to make the representation and manipulation of strings as straightforward and transparent as possible. They provided language support for the representation of strings as literals; compiler support for the automatic transparent conversion of string literals into String objects; and the automatic conversion from any reference type (that is any subclass of class Object which is more informally called an object) into a printable string by use of the Object.toString method. Despite this extra syntactical support for string literals and automatic object-to-string conversion, String literals and literal expressions are still represented and manipulated internally by the Java run-time system as String objects. We shall demonstrate this behaviour by means of a simple example. [edit] 1 + 2 = 3 Consider the print statements: <<automatic call to toString>>= System.out.println("Proof that 1 + 2 = 3"); System.out.println("1 + 2 = " + (1 + 2)); When you run this code it prints to the console: Proof that 1 + 2 = 3 1 + 2 = 3 which, despite Java's wordy print statement System.out.println, is very convenient. When this code is compiled, the Java compiler notes the signature of the method calls to the print statement is System.out.toString(String arg). It therefore knows that the arguments must be converted to a String. In the first expression, String arg is the string literal "Proof that 1 + 2 = 3" and it is converted to a String object by a call to the String constructor: <<convert string literal to String object>>= String literal = new String("Proof that 1 + 2 = 3"); The first part of the second print statement is a string literal "1 + 2 = " and is effectively converted to the java expression: <<convert string literal to String object>>= String arg0 = new String("1 + 2 = "); The next part of the expression is the + operator which in the string context means concatenation. The right-hand argument of the concatenation operator is an expression 1 + 2 which has value equivalent to the expression int arg1 = 1 + 2;. This expression is of type int and must be converted to a String before the concatenation can be performed. To convert int to String we must call upon the services of the wrapper class Integer (discussed further later in this article). The Integer class has a method toString which can convert an int value into a String. Thus the compiler will generate code equivalent to: <<convert int to String object>>= String arg1 = new Integer(1 + 2).toString(); Now the expression is reduced to <<concatenate arguments>>= String arg = arg0 + arg1; To actually print the strings, System.out.println will call String.toString() and pass the resulting bytes to the console's output stream: <<print String objects using toString>>= System.out.println(literal.toString()); System.out.println(arg.toString()); Putting all of this together, the long way to print a pair of String literals is: <<print 1 + 2 = 3 the long way>>= convert string literal to String object convert int to String object concatenate arguments print String objects using toString It is perhaps important to note that this code is close to the code that the compiler actually creates and that Java virtual machine (JVM) must execute to perform this simple operation (although compiler optimization is able to do some simplifications). To prove this we use a pair of assertions: <<proof that 1 + 2 = 3>>= assert ("Proof that 1 + 2 = 3").equals(literal.toString()); assert ("1 + 2 = " + (1 + 2)).equals(arg.toString()); [edit] In String-world 1 + 2 sometimes equals 12 The observant reader will have noted the use of parenthesis in the expression "1 + 2 = " + (1 + 2) and may be wondering why. If we instead wrote: <<not the way to do it>>= System.out.println("Proof that sometimes 1 + 2 does not make 3"); System.out.println("1 + 2 = " + 1 + 2); the result would be the equivalent of: <<not the way to do it>>= System.out.println(new String("Proof that sometimes 1 + 2 does not make 3").toString()); System.out.println((new String("1 + 2 = ").toString()) + (new Integer(1).toString()) + (new Integer(2).toString())); printing Proof that sometimes 1 + 2 does not make 3 1 + 2 = 12 rather than the expected Proof that sometimes 1 + 2 does not make 3 1 + 2 = 3 We can ask the compiler to prove this with another assertion: <<counter proof that 1 + 2 doesn't always make 3>>= assert ("1 + 2 = " + 1 + 2).equals("1 + 2 = 12"); The technical reason for this, perhaps, counterintuitive behaviour is that under the standard Java operator precedence rules, the concatenation operator binds tighter than the addition operator. Thus the compiler sees "1 + 2 = " + 1 + 2 as a sequence of string concatenations. We need the parenthesis to force Java to perform addition rather than concatenation on the arguments 1 + 2. [edit] Program 1 Here is the code for the first example in executable form: <<StringLiterals.java>>= public class StringLiterals { public static void main (String [] args) { System.out.println("Demonstrating that Java's built-in support for string literals\n" + "relies on the toString method and string concatenation"); automatic call to toString print 1 + 2 = 3 the long way proof that 1 + 2 = 3 not the way to do it counter proof that 1 + 2 doesn't always make 3 } } You should note that the test relies on the assert keyword, which was added to Java in JDK 1.4. To run this code you will need to compile the class StringLiterals using a compiler from JDK 1.4 or later. The code for assertion checking is usually turned off, so to have the assert statement executed, the compiler class must be run using the -ea argument: javac StringLiterals.java java -ea StringLiterals We will use the assert keyword in the other programs in this article, and they all must be compiled and run in the same way. [edit] More about Object.toString() As demonstrated in the previous section, much of the magic of string literal manipulation is performed behind the scenes by the String toString() method that all Java reference types, or objects, inherit from java.lang.Object. To illustrate this, let's examine the string generated by Object itself. <<Printing objects>>= Object o = new Object(); System.out.println("Object printed with toString: " + o); An easy and elegant way to force an object to become a string is to concatenate it with a prepended empty string literal as in: <<Printing objects>>= String objectAsString = "" + o; System.out.println("Object as a string literal: " + objectAsString); Of course, because both expressions will eventually call o.toString(), we'd expect the two to be equivalent. We use an assertion to prove that they are: <<Printing objects>>= assert objectAsString.equals(o.toString()); A typical run of the program will produce: Object printed with toString: java.lang.Object@3e25a5 Object as a string literal: java.lang.Object@3e25a5 As you can see, the default behaviour of the Object.toString method is to return the fully qualified name of the calling class (in this case java.lang.Object) followed by an hexadecimal code representing the "address" of the object in Java's run-time heap store. The code @3e25a5 that follows the class name will not necessarily be the same when you run the program, or even between program runs. It is usual for class designers to override the Object.toString method so that classes are able to print useful information about themselves. In the remainder of this section, we shall examine how the designers of the Java API have chosen to override toString for a selection of the most commonly used classes. [edit] Printing Numbers In Java, numbers are not objects, but instead, perhaps for performance reasons, are represented by so-called primitive values. The following simple function uses automatic string conversion to produce the table of numbers that often accompanies Java textbooks. The mechanics of how this is done is discussed in the next section. <<print table of numbers>>= public static void printTableOfNumbers() { String table = "Table of the Java Numbers\n" + "Primitive Type\tSize in Bits\tTypical Value\tMinimum Value\t\tMaximum Value\n" + "==============\t=============\t============\t=============\t\t=============\n"; // byte table += "byte\t\t" + 8 + "\t\t" + (byte)5 + "\t\t" + Byte.MIN_VALUE + "\t\t\t" + Byte.MAX_VALUE + "\n"; // short table += "short\t\t" + 16 + "\t\t" + (short)1024 + "\t\t" + Short.MIN_VALUE + "\t\t\t" + Short.MAX_VALUE + "\n"; // int table += "int\t\t" + 32 + "\t\t" + 65536 + "\t\t" + Integer.MIN_VALUE + "\t\t" + Integer.MAX_VALUE + "\n"; // long table += "long\t\t" + 64 + "\t\t" + 1000000000L + "\t" + Long.MIN_VALUE + "\t" + Long.MAX_VALUE + "\n"; // float table += "float\t\t" + 32 + "\t\t" + 3.14215926 + "\t" + Float.MIN_VALUE + "\t\t\t" + Float.MAX_VALUE + "\n"; // double table += "double\t\t" + 64 + "\t\t" + 100e100 + "\t\t" + Double.MIN_VALUE + "\t\t" + Double.MAX_VALUE + "\n"; table += "==============\t=============\t============\t=============\t\t=============\n"; System.out.println(table); } We add this to the code for PrintingObjects as shown here: <<Printing objects>>= printTableOfNumbers(); The result should be: Table of the Java Numbers Primitive Type Size in Bits Typical Value Minimum Value Maximum Value ============== ============= ============ ============= ============= byte 8 5 -128 127 short 16 1024 -32768 32767 int 32 65536 -2147483648 2147483647 long 64 1000000000 -9223372036854775808 9223372036854775807 float 32 3.14215926 1.4E-45 3.4028235E38 double 64 1.0E102 4.9E-324 1.7976931348623157E308 ============== ============= ============ ============= ============= [edit] Printing Booleans and Characters The boolean primitive has only two values true and false. Let's use these values to produce a truth table for the Exclusive-OR (XOR) operator which is implemented in Java as the ^ operator. <<XOR table>>= public static void truthTable() { String table = " A \t B \t A XOR B\n" + "======\t=======\t========\n" + false + "\t" + false + "\t" + (false ^ false) + "\n" + false + "\t" + true + "\t" + (false ^ true ) + "\n" + true + "\t" + false + "\t" + (true ^ false) + "\n" + true + "\t" + true + "\t" + (true ^ true ) + "\n" + "======\t=======\t========\n"; System.out.println(table); } <<Printing objects>>= truthTable(); The result is: A B A XOR B ====== ======= ======== false false false false true true true false true true true false ====== ======= ======== To illustrate the printing of character values we'll take a string "Now is the time for all good men to come to the aid of the party", convert it into an array of char, perform a 5 character shift cypher on each character and then print the resulting cypher text. <<simple cypher>>= public static String simpleCypher(String plaintext) { char [] letters = plaintext.toCharArray(); String cypherText = ""; for (int i = 0; i < plaintext.length(); i ++) { char letter = plaintext.charAt(i); letter += 5; cypherText += letter; } return cypherText; } <<Printing objects>>= System.out.println(simpleCypher("Now is the time for all good men to come to the aid of the party")); Here is the result: St|%nx%ymj%ynrj%ktw%fqq%ltti%rjs%yt%htrj%yt%ymj%fni%tk%ymj%ufwy~ [edit] Printing Strings Complete me A discussion of what happens when String.toString is called. [edit] Printing Metaclass information Complete Me What is printed when Class.toString() is called and how Object.toString() is able to call upon this meta class behaviour to print something for any object. [edit] A Selection of further Examples from the API Complete Me A selection of examples to illustrate what is printed by other classes for example Data, List, Map, RegExp. This will help us to formulate a discussion of how to write our own toString methods. [edit] Program 2 This program exercises all of the examples from the discussion of "More about Object.toString{)." Complete Me <<PrintingObjects.java>>= include a selection of classes from the API public class PrintingObjects { public static void main (String[] args) { Printing objects } print table of numbers XOR table simple cypher } [edit] Converting Java Primitives to Strings As discussed in the first part of this article, when called upon to print a primitive value, the Java compiler will insert code to wrap the primitive in an instance of the primitive type's wrapper class. It will then call the wrapper class's toString() method. Thus, when compiled: long l = 1000000L; String ls = "" + l; will actually produce code equivalent to: long l = 1000000L; String ls = "" + new Long(l).toString(); In this section, we explore this behaviour in more detail. Complete Me [edit] Define the test fixture To demonstrate the behaviour of automatic string coercion, we'll first create a set of primitives: <<Declare the primitives>>= boolean flag = false; char c = 'A'; byte b = (byte)127; short s = (short)32767; int i = 10000000; long l = 1000000000000L; float f = 3.1415926F; double d = 1.0e99; Note you need to use a cast with the byte and short literals because the Java compiler will always convert a literal integer into an int. Similarly the postfixes L and F are needed because otherwise Java will compile an integer literal as an int and a floating point literal as a double. To make the code of the test itself easier to understand, we create wrapper classes for each of these cases and add them to an array list. <<Wrap the primitives and add to an array list>>= ArrayList primitives = new ArrayList(); Boolean wflag = new Boolean(flag); primitives.add(wflag); Character wc = new Character(c); primitives.add(wc); Byte wb = new Byte(b); primitives.add(wb); Short ws = new Short(s); primitives.add(ws); Integer wi = new Integer(i); primitives.add(wi); Long wl = new Long(l); primitives.add(wl); Float wf = new Float(f); primitives.add(wf); Double wd = new Double(d); primitives.add(wd); [edit] Defining the test For testing purposes, we want to take each element of the array list, convert the object to a string using Object.toString(), and compare the resulting string with the result of automatic String coercion that results from "" + value: <<Test primitive to string conversion>>= for (int primitive = 0; primitive < primitives.size(); primitive++) { Object wrapper = primitives.get(primitive); System.out.println("Testing toString for " + wrapper.getClass()); get result of toString get result of automatic string coercion compare results } All wrapper classes override the toString(), so to get a printable string we simply call this method. There is no need to force a cast, as the correct version of toString method will be called at run time due to polymorphism. <<get result of toString>>= String stringFromWrapper = wrapper.toString(); The proof that wrapper.toString() is the same as the "" + value will be trickier because Java's static type checking will not allow us to easily extract the value stored in the wrapper class. For now let us postulate that we can create a static method coerceWrappedPrimitiveValueToString(Object) that will take a wrapper object and return the value of the wrapped primitive coerced to a string. With this method available we can write: <<get result of automatic string coercion>>= String coercedFromValueOfPrimitive = coerceWrappedPrimitiveValueToString(wrapper); and the final test is now simply: <<compare results>>= assert stringFromWrapper.equals(coercedFromValueOfPrimitive); [edit] Validating the auto string coercion We want to prove that the general statement "" + primitive is converted by the compiler to new Wrapper(primitive).toString(). <<define a method to coerce a primitive to a string>>= static String coerceWrappedPrimitiveValueToString(Object o) { process according to type of wrapper object } Unfortunately, Java's static type checking makes the coercion of the primitive value of a wrapper class much more difficult than the simple polymorphic call to wrapper.toString(). There is no equivalent polymorphic call valueOf that can return a primitive of undefined type. Instead, each wrapper class provides method with signature type typeValue(). We will need to use run-time type identification (RTTI) to cast the object parameter to its wrapper class before we can extract the value and coerce the value to a string. We will need to identify the class of the wrapper object o before we can assign the wrapped primitive value to a value of the correct type. We will use a decision tree for this: <<process according to type of wrapper object>>= if (o instanceof Boolean) { return value of Boolean coerced to String } else if (o instanceof Character) { return value of Character coerced to String } else if (o instanceof Byte) { return value of Byte coerced to String } else if (o instanceof Short) { return value of Short coerced to String } else if (o instanceof Integer) { return value of Integer coerced to String } else if (o instanceof Long) { return value of Long coerced to String } else if (o instanceof Float) { return value of Float coerced to String } else if (o instanceof Double) { return value of Double coerced to String } else { unexpected class: return default coercion } Having decided which wrapper class we are dealing with, we can get at the primitive type and coerce it to a string. Here's the example for the Long case. You'll note that we have to downcast the reference from Object to Long before we can access the longValue() method: <<return value of Long coerced to String>>= long l = ((Long)o).longValue(); return "" + l; The other types follow the same pattern: <<return value of Boolean coerced to String>>= boolean b = ((Boolean)o).booleanValue(); return "" + b; <<return value of Character coerced to String>>= char c = ((Character)o).charValue(); return "" + c; <<return value of Byte coerced to String>>= byte aByte = ((Byte)o).byteValue(); return "" + aByte; <<return value of Short coerced to String>>= short s = ((Short)o).shortValue(); return "" + s; <<return value of Integer coerced to String>>= int i = ((Integer)o).intValue(); return "" + i; <<return value of Float coerced to String>>= float f = ((Float)o).floatValue(); return "" + f; <<return value of Double coerced to String>>= double d = ((Double)o).doubleValue(); return "" + d; Finally, for the case where no match is found, we simply let Object.toString() do its thing: <<unexpected class: return default coercion>>= return "" + o; [edit] Program 3 The complete test class is: <<ConvertPrimitivesToStrings.java>>= import java.util.ArrayList; public class ConvertPrimitivesToStrings { public static void main (String [] args) { System.out.println("Demonstrating that Java's built-in support for converting primitives\n" + "to Strings relies on the toString method"); Declare the primitives Wrap the primitives and add to an array list Test primitive to string conversion } define a method to coerce a primitive to a string } [edit] Using Automatic String Generation in your own Classes [edit] The Default behaviour Complete Me Will discuss what happens when you don't override Object.toString when you write your own classes... [edit] Writing Your Own toString Method ... and the benefits you and your clients will get for free when you do. [edit] Using Jakarta Commons For more complex classes, it can become tedious to have to write consistently good consistent toString methods. The Apache Jakarta Commons library comes to the rescue here with the org.apache.commons.lang.builder.ToStringBuilder(). We will illustrate the use of this in a custom class of own own and demonstrate some of its other features, such as the use of reflection to completely automate the creation of a toString method, or even to interrogate foreign objects. This particular section may be a little too advanced for this introductory article and it may benefit from presentation in a separate article. Complete Me [edit] Discussion Note: needs revision! Clearly you can call toString() yourself whenever you need a printable version of an object or a primitive, but arguably the compiler's default behaviour allows you to write clearer and cleaner code. [edit] References Joshua Block, Effective Java, Sun Microsystems Press, Addison Wesley, 2002. Bruce Eckel, Thinking in Java, 4th Edition, Prentice-Hall, 2006.
http://en.literateprograms.org/Convert_Object_to_String_(Java)
CC-MAIN-2015-48
refinedweb
3,597
52.39
22 - 28, 2004 Filter by week: 1 2 3 4 5 One Source, two destination, and with lookup Posted by RL at 4/28/2004 9:41:01 PM Hello Everyone I have this situation, 1. Read from Order file (txt approx 100 records 2. Look for the Customer Code in the Customer Table 3. If you find the record insert into Order tabl 4. Else Insert into Order Reject tabl Can any one guide me what's the best way to handle this. Because t... more >> Connections in DTS Package Posted by Joan Brown at 4/28/2004 8:33:18 PM We are using three servers (development, test/qa, and production). How can I code a connection in a package to reference the local server? We are using 'save as' to migrate from one server to another but we still have to modify the package on the destination server to have the correct... more >> parameter based on a query Posted by Dean at 4/28/2004 6:16:58 PM I need to create this Transform Data Task: SELECT ... FROM <TeradataDB> WHERE <TeradataDB>.customer IN (?) The parameter needs to be: SELECT DISTINCT customer FROM <localDB> The first problem is that the TeradataDB driver is ODBC, which does support parameters. The second problem is that... more >> help! scheduled package does nothing... Posted by Rea Peleg at 4/28/2004 4:53:20 PM I am totally lost on this one: I have a dts package which transforms data from tabula database into sql server through a propriatary odbc driver. When executed manually (through 'execute package') all is well, connection is made to both sides and data flows nicely. When I schedule this package... more >> DTS--> xp_cmdshell permission error Posted by Mohamadi at 4/28/2004 10:11:53 AM Hi All, I have several DTS jobs owned by several users. Now i am trying to put all the jobs in a procedure and schedule the same thru SQL-Agent. But it fails giving error --> EXECUTE permission denied on object 'xp_cmdshell', database 'master', owner 'dbo'. Any idea why is it happening... more >> Data from Oracle Posted by Michael V at 4/28/2004 5:41:03 AM Need to have access to data in an oracle base from visual basic. Ideally I would just connect to a database and get records from an oracle table using some kind of connection string However - I dont' think this can be done without the oracle client installed on the machine fro which i'm running ... more >> HELP! Running DTS in a batch file with multiple files Posted by troubleD at 4/27/2004 9:01:05 PM I have a command batch file that reads the names of csv files in a text file and then calls the DTS package every csv file.. My problem is that the loop on the batch file wont work... can you tell what is wrong with this code? csv files in DirList.txt: a.cs b.cs c.cs for /f "tokens=1,2,3" ... more >> Run VBA code Posted by Mtz at 4/27/2004 5:51:03 PM Hi Need to run VBA procedures as part of an ETL process (DTS). Can this be done? How Thansk in advance Regards MTZ... more >> Don't see what you're looking for? Search DevelopmentNow.com. Posted by no5pam NO[at]SPAM hotmail.com at 4/27/2004 3:26:21 PM I am getting the following error when attempting to CopyColumn between a fixed-width flat file and a nullable decimal column: "Error Source : Microsoft Data Transformation Services (DTS) Data Pump Error Description : The number of failing rows exceeds the maximum specified. TransformCopy 'D... more >> Importing a SQL7 Database Posted by jlc at 4/27/2004 2:32:55 PM I have a SQL MDF and LDF file I need to import into an existing instance of SQL7 server along with all stored procedures, etc. What is the best way to do that through DTF? ... more >> Assign SQL Script to global variable Posted by Ron Sissons at 4/27/2004 12:32:42 PM How do I assign the contents of a script.sql file to a global variable? Do I have to read it in line by line? Can you show me the code to do it? Ron Sissons, DBA Information Technology Services Riverside County Office of Education 3939 Thirteenth Street, Riverside, CA 92502-0868 Telephone... more >> Failing VB Commands For DTS Posted by Johnmichael Monteith at 4/27/2004 11:36:32 AM I am using a DTS import, and trying to get the sucker to determine which is the later date and use that for importing. I know I successfully did this before, but now can not re-create it. The error I get is: "Error Description: Invalid procedure call or argument: 'DTSSource' Error on Line 6... more >> Changing the SqlTask Via ActiveX Posted by Ron Sissons at 4/27/2004 8:26:05 AM Hi, I have a dts package that has 23 ExecuteSqlTasks. I know how to loop through each task. What I want to do is change the Sql Statement. The problem is the sql script I have is about 800 lines long. How do I assign the script.sql file into a global variable and then assign it to "oTask.custo... more >> DTS and DBF's Posted by Steve at 4/27/2004 8:12:45 AM I am using a DBF as a staging table in a DTS package. At the end (or anywhere in the package after the records have been imported into SQL) I need to delete all records from the DBF to ready the table for the next use. I cannot figure out how to do this. Any help would be appreciated.... more >> DTS Error EXCEPTION_ACCESS_VIOLATION Posted by Sandeep at 4/27/2004 6:41:43 AM We have some DTS jobs that runs every night called via a batch file. But since the past few days we are getting the following error. Error: -2147221499 (80040005); Provider Error: 0 (0) Error string: Need to run the object to perform this operation Error source: Microsoft Dat... more >> changing package owner Posted by jill johnson at 4/27/2004 6:41:03 AM Hi, I am a system administrator and I tried using sp_changeobjectowner to change the owner of a local dts package and it won't let me. Is there another way to change a local package owner in EM or QA? Thanks!... more >> Passing parameters in front-end Posted by Kaspian at 4/26/2004 11:31:03 PM Hi I am a newbie to DTS. What I am doing now is I have created a set of DTS packages, which are executed one from another. I am now preparing a frontend for the users to run the packages for themselves. I found examples on how to do that, but what I need to do is make the users pass parameters to ... more >> Order of Steps Posted by Ananth at 4/26/2004 9:06:03 PM Hello I have a DTS package with several steps and am trying to provide a front-end for the user which wil list the steps and show a progress bar on executing each step (very similar to the one provided by SQL Serve itself)...however, I do not seem to get the steps IN ORDER..FYI, I am using C# (W... more >> FTP using DTS disables line break in fixed format files?? Posted by cgseenu NO[at]SPAM rediffmail.com at 4/26/2004 5:39:59 PM Hi, I have been trying to ftp a fixed format file using DTS (SQL Server 2000). The file gets through, but the format of the file changes. If i have 30 records on the original file, The line break seems to disappear and the whole text appears in one line. If FTP the file using command prompt an... more >> Check if cancelled Posted by Brandon Lilly at 4/26/2004 12:49:34 PM Is there a way to check inside of an ActiveX task whether or not someone is trying to cancel the execution of the package? I noticed that the package doesn't actually cancel until a step completes. Thanks, Brandon ... more >> how to pass parameters to DTS from vbcode Posted by aparna at 4/26/2004 12:40:36 PM ... more >> PROBLEMS SCHEDULING DTS-PACKAGE Posted by Peter Häggmyr at 4/26/2004 9:34:13 AM I have an interesting poblem on a SQL Server. I right-click on a DTS-package and schedules it to run at a specific time. Normally the DTS-package appears in SQL Agent/Jobs, but on this server it does not. Does someone know how to schedule this package? //Peter ... more >> "Type Mismatch" calling GetExecutionErrorInfo Method from VBscript Posted by Niels Belonje at 4/26/2004 7:46:07 AM While looping trough te "Steps" collection of a 'Package-object' with the variable "objStep" the call in VBscript o "objStep.GetExecutionErrorInfo lngErrorCode , ErrSource, ErrDescription" results in an error message " Type mismatch: 'objStep.GetExecutionErrorInfo' Does anybody know what i... more >> Drivers doesn't appear in win2003 Posted by sch at 4/26/2004 7:46:04 AM Hello We just upgraded from win2k to win2003 on our sql server 2000 and we can't see the other drivers like excel,access conncetions in DTS packages is there a way we can load them how do they come they used to come default in win 2k all the packages which have connections to excel and access fai... more >> removing checks for DTS package Posted by werner kempler at 4/26/2004 6:31:07 AM We have a package that a few different people will be modifying. I don't want to make them all SA so is there a way to remove the ownership check so that they can all save their changes? ... more >> Passing a db name as a variable Posted by Peter at 4/26/2004 4:56:06 AM Is there a method for passing the database name as a variable to a DTS package We have all item databases named as the business date i.e. 20040426AllItem and I'm manually having to re-select the database to run the DTS packages as I haven't found a way of passing the db name (I've also tried to inc... more >> DTS Global Variable types Posted by mndr at 4/26/2004 3:26:07 AM Please, could anyone tell me how to pass NULL to the global variable of data type Date I need to start my DTS package from EM, but when i enter NULL into the window with global variable i receive error message "Could not convert variable startDate from type BSTR to type Date. What should i put the... more >> How to execute vb.net function in a dll file from DTS ? Posted by sadmick NO[at]SPAM hotmail.com at 4/26/2004 2:14:38 AM Hi, i want execute in my DTS a function .NET that has been define in a dll file ! Thanks Mick... more >> Import XML file into SQL Server Posted by Lena M at 4/25/2004 4:51:02 AM Hi Help Needed!!!!!!!!!! ASAP!!!!!!!! I need to import data from XML file into SQL Server Table Is there any way to do via DTS or Bulk Insert? May be something else. So far I find nothing. : Thanks!... more >> import excel spreadsheet to sql via vb or java script Posted by Fred at 4/24/2004 8:45:07 PM All, I am looking for a vb or java script that will delete an sql table named Daily_List, then import an excel spreadsheet from a network share (in a public folder) into sql server 2000 and name it Daily_List. a few things to note, the excel spreadsheet name changes daily based on the d... more >> Green Transfor Data Task Line Posted by anonymous NO[at]SPAM discussions.microsoft.com at 4/23/2004 10:16:10 PM Hi I have recently inherited a number of DTS packages. One of the packages contains a Transfer Data Task that has a Green arrow instead of the usual Grey one. Can anyone advise what the Green arrow indicates? Thanks ... more >> Workflow for a Transform Data Posted by anonymous NO[at]SPAM discussions.microsoft.com at 4/23/2004 10:14:07 PM All How do I error handle a Transform Data Task by using the worflow on failure goto as I would with any other step. Thansk... more >> Linked Server or DTS Package? Posted by Admin at 4/23/2004 7:44:02 PM Hi, I have a question? Linked Server or DTS Package? Which is more advantageous while importing data from one SQL Srver to another SQL Server?. If available please direct me to any article available on the internet. ... more >> Building DTS package through scripts? Posted by Kasp at 4/23/2004 5:17:33 PM Hello, I have made some DTS package that I need to replicate my DTS packages at another machine. How can I do this? I created these packages using Enterprise Manager of SQL Server 2000 Enterprise Edition. I understand that you can save the DTS packages a VB ..bas file, Structure file or in... more >> Data Transformation Posted by Simon Whale at 4/23/2004 3:08:15 PM All, what i am trying to do is as follows, i wish to import over 100 files into a database. i have decided to try this through DTS, but i need each file in their own table but i'll only know the names of the files on import. is there a way through DTS Data pump that i can dynamically supply... more >> Triggering a DTS job Posted by willa at 4/23/2004 3:05:12 PM I have a dts job that runs on my MSSQL2000 win2K box. But i want to be able to trigger this dts job to run when a three files are loaded into a certian directory. I can have trigger files uploaded at the same time but am just wondering what ways their are to acchive my goal? I dont wa... more >> DTS Package Reusability Posted by mike.schiewer NO[at]SPAM ssfhs.org at 4/23/2004 1:37:37 PM I am new to DTS. I have created a package containing several steps using Oracle as the source db. The database contains several instances of the same schema with different owners. A sample of a source query follows: SELECT HA.DEFENDANT.UID, HA.DEFENDANT.SEQUENCE, H... more >> Job status on Mainfram Posted by Obaid at 4/23/2004 10:58:10 AM We have this situation where we are trying to get a status from Mainframe/DB2 replication (copy) in order to start the load for SQL Server 2000. What it will do, it will provide us the green signal that load was successful (between MF and staging area) before next load. Using DataJoiner f... more >> Managing DTS layout Posted by rosdav007 NO[at]SPAM hotmail.com at 4/23/2004 7:48:49 AM Does someone know how to manage DTS layout (stpe postion,coordinates,...)? Obviously without using DTS designer.. Bye... more >> DTS / Interbase / Firebrid - Schedule Issue Posted by Neil at 4/23/2004 7:37:41 AM Has anyone ever successfully scheduled a DTS package to connect and extract data from an Interbase/Firebird database? My DTS package will run under SEM but fails when scheduled. Most of the time this would be linked to permissions in some way but I have very carefully setup a test envir... more >> Saving DTS package to Visual Basic File Posted by Stuart Hearn at 4/23/2004 4:36:02 AM I'm trying to save a DTS package i've created in DTS Designer to a Visual Basic File. The DTS package contains a 'Transfer Databases Task' which copies a database from one server to another. This tasks runs fine if i save it to SQL Server, but i need to save to VB If i try and save it to a VB file... more >> How to ... dynamic 'delta-upload' ? Posted by rholling at 4/23/2004 2:41:02 AM Hi there i want to implement a dynamic delta-upload Source is Oracle, Target SQL2000; I want to extract data from Oracle only for a couples of days - perhaps the last 3 days and not a full upload of the whole table. The starting date and the ending date of this range are stored in a table in the ... more >> How to call Webervice from DTS Posted by omg at 4/22/2004 3:56:07 PM What would be the best way to call a web service "" from a DTS package. I started to look at the VBScript and JScript but couldn't find a solution Any help would be appreciated. Thanks... more >> DTS and Stored Procedures with # tables Posted by shyam venkatram at 4/22/2004 2:52:11 PM Hello, I have a Stored Procedure that returns data from a # temp table. This proc works fine with query analyzer. When I try to run this query from a DTS package exporting the data to an excel , I cant seem to parse the query in the "Transform Data Task Properties" as it cannot find the #tempt... more >> Renaming DTS connnections via SQL Posted by Beema at 4/22/2004 11:54:20 AM I have numerous DTS's which import data from text files - there are around 30-40 text files sources. I have named them all with all capitals and would like to change them to all lowercase via an update statement in msdb, rather than doing them all one by one. Is this possible my running an ... more >> Rename DTS Package Posted by Zaki Baig at 4/22/2004 10:34:10 AM Hi! Can I rename a DTS Package. I have a big list of Packages with indisciplined naming convention. I am planning to rename all these packages by applying a proper naming convention. Can someone explain me how to rename the packages. I would not like to use the Save As option as it is ... more >> Error DTSRun.exe "Memory could not be written" Posted by Caro B at 4/22/2004 10:31:02 AM Hello, I've this problem (SQL2K SP3a) A DTS process that has been running for 3 o 4 month, yertarday didn't run. It invokes OLAP tasks, such the process of dimensions In the event log there was this message "dtsrun.exe - Application error : The instruction at XXXXX referenced memory at YYYYYY. T... more >> SQL Server Agent warning 208 Posted by Bob at 4/22/2004 10:16:27 AM I have a DTS package that suddenly failed after running successfully for over a month. SQL SP3 is installed for all instances. Any ideas would be helpful. Thanks Bob... more >> Error Handling in DTS Package - Overwrite log file? Posted by ktgillen at 4/22/2004 7:41:02 AM Hello I'm trying to perform package-level error handling within DTS. I would like to generate a log file when an error occurs within the package. I have been successful doing so, but do not like the fact that each time the package runs the file is appended to rather than being overwritten. Is ther... more >> Log Shipping for Developer Edition Posted by Lalit at 4/22/2004 12:06:02 AM Hi Friends I am having the Developer edition of SQL Server 2000. I need Log Shipping. From the books online i tried the procedures but could not configure. There is no proper sequence for configuration using T-SQL in books online Can any one please tell me how to configure the Log Shipping Wi... more >> · · groups Questions? Comments? Contact the d n
http://www.developmentnow.com/g/103_2004_4_22_0_0/sql-server-dts.htm
crawl-001
refinedweb
3,331
70.73
> 200458293201.rar > STREAM.ASM COMMENT# * Win2k.Stream --------------------------- * by Benny/29A and Ratter --------------------------- Let us introduce very small and simple infector presenting how to use features of NTFS in viruses. This virus loox like standard Petite-compressed PE file. However, it presents the newest way of PE file infecting method. How the virus worx? It uses streamz, the newest feature of NTFS filesystem and file compression, already implemented in old NTFS fs. ------------------------------------- * Basic principles of NTFS streamz ------------------------------------- How the file loox? Ya know that the file contains exactly the same what you can see when you will open it (e.g. in WinCommander). NTFS, implemented by Windows 2000, has new feature - the file can be divided to streamz. The content what you can see when you will open the file is called Primary stream - usually files haven't more than one stream. However, you can create NEW stream ( = new content) in already existing file without overwritting the content. Example: addressing of primary stream -> e.g. "calc.exe" addressing of other streamz -> : e.g. "calc.exe:stream" If you have NTFS, you can test it. Copy to NTFS for instance "calc.exe", and then create new file "calc.exe:stream" and write there "blahblah". Open "calc.exe". Whats there? Calculator ofcoz. Now open "calc.exe:stream". Whats there? "blahblah", the new file in the old one :) Can you imagine how useful r streamz for virus coding? The virus infects file by moving the old content to the new stream and replacing the primary stream with virus code. File (calc.exe) before infection: -Calc.exe---------------------------- rimary stream (visible part) Calculator ------------------------------------- File (calc.exe) after infection: -Calc.exe---------------------------- rimary stream (calc.exe) Next stream (calc.exe:STR) Virus Calculator ------------------------------------- Simple and efficent, ain't it? --------------------- * Details of virus --------------------- * The virus infects all EXE files in actual directory. * The virus uses as already-infected mark file compression. All infected files are compressed by NTFS and virus then does not infect already compressed files. Well, almost all files after infection r smaller than before, so user won't recognize virus by checking free disk space :) * If user will copy the infected file to non-NTFS partition (in this case only primary stream is copied), the host program will be destroyed and instead of running host program virus will show message box. That can be also called as payload :P * The virus is very small, exactly 3628 bytes, becoz it's compressed by Petite 2.1 PE compression utility (). * The disinfection is very easy - just copy the content of :STR to and delete :STR. If you want to create sample of infected file, then just copy the virus to some file and copy any program (host program) to :STR. Thats all! However, AVerz have to rebuild their search engine to remove this virus, becoz until now, they had no fucking idea what are streamz :) * This virus was coded in Czech Republic by Benny/29A and Ratter, on our common VX meeting at Ratter's city... we just coded it to show that Windows 2000 is just another OS designed for viruses... it really is :) * We would like to thank GriYo for pointing us to NTFS new features. The fame is also yourz, friend! ---------------- * In the media ---------------- AVP's description: This is the first known Windows virus using the "stream companion" infection method. That method is based on an NTFS feature that allows to create multiple data streams associated with a file. *NTFS Streams* Each file contains at least one default data stream that is accessed just by content is also read from the default stream. Additional file streams may contain any data. The streams cannot be accessed or modified without reference to the file. When the file is deleted, its streams are deleted as well; if the file is renamed, the streams follow its new name. In the Windows package there is no standard tool to view/edit file streams. To "manually" view file streams you need to use special utilities, for instance the FAR utility with the file steams support plug-in (Ctrl-PgDn displays file streams for selected file). *Virus Details* The virus itself is a Windows application (PE EXE file) compressed using the Petite PE EXE file compressor and is about 4K in size. When run it infects all EXE files in the current directory and then returns control to the host file. If any error occurs, the virus displays the message: Win2k.Stream by Benny/29A & Ratter This cell has been infected by [Win2k.Stream] virus! While infecting a file the virus creates a new stream associated with the victim file. That stream has the name "STR", i.e. the complete stream name is "FileName:STR". The virus then moves the victim file body to the STR stream (default stream, see above) and then overwrites the victim file body (default stream) with its (virus) code. As a result, when an infected file is executed Windows reads the default stream (which is overwritten by virus code) and executes it. Also, Windows reports the same file size for all infected files - that is the virus length. To release control to the host program the virus just creates a new process by accessing the original file program using the name "FileName:STR". That infection method should work on any NTFS system, but the virus checks the system version and runs only under Win2000. AVP's press release: *A New Generation of Windows 2000 Viruses is Streaming Towards PC Users* Moscow, Russia, September 4, 2000 ?Kaspersky Lab* File before infection File after infection ." In MSNBC's news: *New trick can hide computer viruses* *But experts question danger posed by stream technology* Sept. 6. THE VIRUS, CALLED W2K.STREAM, poses little threat it was written as a relatively benign roof of concept.But, according to a source who requested anonymity, it was posted on several virus writer Web sites over Labor Day weekend ?making copycats possible. The virus takes advantage of a little-used feature included in Windows 2000 and older Windows NT systems that allows programs to be split into pieces called streams. Generally, the body of a program resides in the main stream. But other streams can be created to store information related to what in the main stream. Joel Scambray, author of acking Exposed,described these additional streams as lost-it notes attached to the main file. The problem is that antivirus programs only examine the main stream. W2K.Stream demonstrates a programmer ability to create an additional stream and hide malicious code there. ertainly, this virus begins a new era in computer virus creation,said Eugene Kaspersky, Head of Anti-Virus Research at Kaspersky Lab, in a press release. he stream Companion technology the virus uses to plant itself into files makes its detection and disinfection extremely difficult to complete. *THIS BUG ISN DANGEROUS* No W2K.stream infections have been reported, and experts done believe the virus is in the wild circulating on the Internet yet. At any rate, this virus actually makes things easy for antivirus companies. If a user is infected, the program creates an alternate stream and places the legitimate file in this alternate location; the virus replaces it as the main stream. That makes detection by current antivirus products easy. But future viruses could do just the opposite, evading current antivirus products. One antivirus researcher who requested anonymity called release of the bug omewhat akin to the first macro virus.He added that reengineering antivirus software to scan for multiple streams would be a complicated effort. this case, many anti-virus products will become obsolete, and their vendors will be forced to urgently redesign their anti-virus engines, Kaspersky said. *AN OLD ISSUE* - There is nothing new about the potential of exploiting the multiple stream issue; Scambray hints at the problem in the book acking Exposed,and described it even more explicitly in a 1998 Infoworld.com article. The SANS Institute, a group of security researchers, issued an lert criticizing antivirus companies for not updating their products to scan the contents of any file stream earlier. found that the scanners were incapable of identifying viruses stored within an alternate data stream,the report said. or example if you create the file MyResume.doc:ILOVEYOU.vbs and store the contents of the I Love You virus within the alternate data stream file, none of the tested virus scanners were capable of finding the virus during a complete disk scan. But some antivirus companies described the threat as minimal because the alternate stream trick only hides the bug while it stored on a victim computer. Pirkka Palomaki, Director of Product Marketing for F-Secure Corp., said for the virus to actually run, it has to come out of hiding and load into main memory. it would be detected as it tried to activate, Palomaki said. but this signifies importance of real-time protection. He added the virus would still have to find its way onto a victim computer; and that victim would have to be tricked into installing the virus using one of the traditional methods, such as clicking on an infected e-mail attachment. it could increase the ability to for scanners to miss something, said Pat Nolan, virus researcher at McAfee Corp. but we are on top of it. If there is a vulnerability, it will be short-lived. ----------------------- * How to compile it? ----------------------- Use Petite version 2.1 (). tasm32 /ml /m9 /q stream tlink32 -Tpe -c -x -aa stream,,,import32 pewrsec stream.exe petite -9 -e2 -v1 -p1 -y -b0 -r* stream.exe And here comes the virus source... /# .586p .model flat,stdcall include win32api.inc ;include filez include useful.inc extrn ExitProcess:PROC ;used APIz extrn VirtualFree:PROC extrn FindFirstFileA:PROC extrn FindNextFileA:PROC extrn FindClose:PROC extrn WinExec:PROC extrn GetCommandLineA:PROC extrn GetModuleFileNameA:PROC extrn DeleteFileA:PROC extrn ReadFile:PROC extrn CopyFileA:PROC extrn WriteFile:PROC extrn CreateFileA:PROC extrn CloseHandle:PROC extrn MessageBoxA:PROC extrn GetFileSize:PROC extrn VirtualAlloc:PROC extrn DeviceIoControl:PROC extrn GetFileAttributesA:PROC extrn GetTempFileNameA:PROC extrn CreateProcessA:PROC extrn GetVersion:PROC FSCTL_SET_COMPRESSION equ 9 shl 16 or 3 shl 14 or 16 shl 2 ;compression flag STARTUPINFO STRUCT ;used by CreateProcessA API cb DWORD ? lpReserved DWORD ? lpDesktop DWORD ? lpTitle DWORD ? dwX DWORD ? dwY DWORD ? dwXSize DWORD ? dwYSize DWORD ? dwXCountChars DWORD ? dwYCountChars DWORD ? dwFillAttribute DWORD ? dwFlags DWORD ? wShowWindow WORD ? cbReserved2 WORD ? lpReserved2 DWORD ? hStdInput DWORD ? hStdOutput DWORD ? hStdError DWORD ? STARTUPINFO ENDS PROCESS_INFORMATION STRUCT hProcess DWORD ? hThread DWORD ? dwProcessId DWORD ? dwThreadId DWORD ? PROCESS_INFORMATION ENDS @pushvar macro variable, empty ;macro for pushing variablez local next_instr ifnb %out too much arguments in macro '@pushvar' .err endif call next_instr variable next_instr: endm .data extExe db '*.exe',0 ;search mask fHandle dd ? ;file search handle file_name db MAX_PATH dup(?) ;actual program name db MAX_PATH dup(?) file_name2 db MAX_PATH dup(?) ;temprorary file db 4 dup (?) WFD WIN32_FIND_DATA ? ;win32 find data proc_info PROCESS_INFORMATION <> ;used by CreateProcessA startup_info STARTUPINFO <> ;... .code Start: ;start of virus call GetVersion ;get OS version cmp al,5 ;5 = Win2000 jnz msgBox ;quit if not Win2000 mov edi,offset file_name push MAX_PATH push edi push 0 call GetModuleFileNameA ;get path+filename of actual ;program push offset WFD push offset extExe call FindFirstFileA ;find first file to infect test eax,eax jz end_host mov [fHandle],eax ;save handle search_loop: call infect ;try to infect file push offset WFD push dword ptr [fHandle] call FindNextFileA ;try to find next file test eax,eax jne search_loop ;and infect it push dword ptr [fHandle] call FindClose ;close file search handle end_host: mov esi,offset file_name ;get our filename push esi @endsz dec esi mov edi,esi mov eax,"RTS:" ;append there :"STR" stream stosd ;name pop esi call GetCommandLineA ;get command line xchg eax,edi ;to EDI ;esi - app name ;edi - cmd line xor eax,eax push offset proc_info push offset startup_info push eax push eax push eax push eax push eax push eax push edi push esi call CreateProcessA ;jump to host code xchg eax,ecx jecxz msgBox ;if error, show message box end_app: push 0 call ExitProcess ;exit msgBox: push 1000h ;show some lame msg box :) @pushsz "Win2k.Stream by Benny/29A & Ratter" ;copyleft :] @pushsz "This cell has been infected by [Win2k.Stream] virus!" push 0 ;with name of virus and authorz call MessageBoxA jmp end_app infect: push offset [WFD.WFD_szFileName] call GetFileAttributesA ;check if the file is NTFS test eax,800h ;compressed = already infected jz next_infect ret ;quit then next_infect: push offset [WFD.WFD_szFileName] mov byte ptr [flagz],OPEN_EXISTING call Create_File ;open found program jz infect_end xor eax,eax push eax @pushvar push eax push eax push 4 @pushvar ;default compression push FSCTL_SET_COMPRESSION push ebx ;NTFS compress it = call DeviceIoControl ;mark as already infected ; = and save disk space :) push ebx call CloseHandle ;close file handle mov esi,offset file_name2 push esi push 0 @pushsz "str" @pushsz "." call GetTempFileNameA ;create name for temp file test eax,eax jz infect_end mov edi,offset [WFD.WFD_szFileName] push 0 push esi push edi call CopyFileA ;copy there victim program test eax,eax jz infect_end push 0 push edi push offset file_name call CopyFileA ;copy ourself to victim program push esi mov esi,edi @endsz xchg esi,edi dec edi mov eax,"RTS:" ;append :"STR" stream to stosd ;victim program filename xor al,al stosb call Create_File ;open victim file jz infect_end push 0 push ebx call GetFileSize ;get its size xchg eax,edi push PAGE_READWRITE push MEM_COMMIT or MEM_RESERVE push edi push 0 call VirtualAlloc ;allocate enough memory test eax,eax ;for file content jz infect_end_handle xchg eax,esi xor eax,eax push eax @pushvar push edi push esi push ebx call ReadFile ;read file content to test eax,eax ;allocated memory jz infect_end_handle push ebx call CloseHandle ;close its file handle push offset file_name2 call DeleteFileA ;delete temporary file mov byte ptr [flagz],CREATE_ALWAYS push offset [WFD.WFD_szFileName] call Create_File ;open stream jz infect_end_dealloc push 0 mov ecx,offset file_size push ecx push dword ptr [ecx] push esi push ebx call WriteFile ;write there victim program test eax,eax jz infect_end_handle infect_end_handle: push ebx call CloseHandle ;close its file handle infect_end_dealloc: push MEM_DECOMMIT push dword ptr [file_size] push esi call VirtualFree ;free allocated memory push MEM_RELEASE push 0 push esi call VirtualFree ;release reserved part of mem infect_end: ret ; [esp+4] - file_name Create_File: ;proc for opening file xor eax,eax push eax push eax db 6ah flagz db OPEN_EXISTING ;variable file open flag push eax push eax push GENERIC_READ or GENERIC_WRITE push dword ptr [esp+1ch] call CreateFileA ;open file xchg eax,ebx ;handle to EBX inc ebx ;is EBX -1? lahf ;store flags dec ebx ;correct EBX sahf ;restore flags retn 4 ;quit from proc end Start ;end of virus
http://read.pudn.com/downloads101/sourcecode/asm/414658/200458293201/STREAM.ASM__.htm
crawl-002
refinedweb
2,450
60.85
Hi All, Pydev 1.5.4 has been released Details on Pydev: Details on its development: Release Highlights: ------------------------------- * Actions: o Go to matching bracket (Ctrl + Shift + P) o Copy the qualified name of the current context to the clipboard. o Ctrl + Shift + T keybinding is resolved to show globals in any context (note: a conflict may occur if JDT is present -- it can be fixed at the keys preferences if wanted). o Ctrl + 2 shows a dialog with the list of available options. o Wrap paragraph is available in the source menu. o Globals browser will start with the current word if no selection is available (if possible). * Templates: o Scripting engine can be used to add template variables to Pydev. o New template variables for next, previous class or method, current module, etc. o New templates for super and super_raw. o print is now aware of Python 3.x or 2.x * Code analysis and code completion: o Fixed problem when getting builtins with multiple Python interpreters configured. o If there's a hasattr(obj, 'attr), 'attr' will be considered in the code completion and code analysis. o Fixed issue where analysis was only done once when set to only analyze open editor. o Proper namespace leakage semantic in list comprehension. o Better calltips in IronPython. o Support for code-completion in Mac OS (interpreter was crashing if _CF was not imported in the main thread). * Grammar: o Fixed issues with 'with' being used as name or keyword in 2.5. o Fixed error when using nested list comprehension. o Proper 'as' and 'with' handling in 2.4 and 2.5. o 'with' statement accepts multiple items in python 3.0. * Improved hover: o Showing the actual contents of method or class when hovering. o Link to the definition of the token being hovered (if class or method). * Others: o Completions for [{( are no longer duplicated when on block mode. o String substitution can now be configured in the interpreter. o Fixed synchronization issue that could make Pydev halt. o Fixed problem when editing with collapsed code. o Import wasn't found for auto-import location if it import started with 'import' (worked with 'from') o Fixed interactive console problem with help() function in Python 3.1 o NullPointerException fix in compare editor. -------------- next part -------------- An HTML attachment was scrubbed... URL: <>
https://mail.python.org/pipermail/python-list/2010-January/564945.html
CC-MAIN-2016-40
refinedweb
392
67.55
openpyxl - A Python library to read/write Excel 2007 xlsx/xlsm files¶ Introduction¶ OpenPyxl is a Python library to read/write Excel 2007 xlsx/xlsm files. It was born from lack of existing library to read/write natively from Python the new Office Open XML format. All kudos to the PHPExcel team as openpyxl is a Python port of PHPExcel Sample code:¶ from openpyxl import Workbook wb = Workbook() # grab the active worksheet ws = wb.active # Data can be, or there is a high probability your work will not be taken into consideration). There are plenty of examples in the /test directory Tools This is an open-source project, maintained by volunteers on their spare time, so while we try to work on this project as often as possible, sometimes life gets in the way. Please be patient..4, so if it does not work on your environment, let us know :-) Installation¶ The best method to install openpyxl is using a PyPi client such as easy_install (setuptools) or pip. It is advisable to do this in a Python virtualenv without system packages: $ pip install openpyxl or $ easy_install openpyxl Note To install from sources (there is nothing to build, openpyxl is 100% pure Python), you can download an archive from bitbucket (look in the “tags” tab). There is support for the popular lxml library which will be used if it is installed. After extracting the archive, you can do: $ python setup.py install Warning To be able to include images (jpeg,png,bmp,...) into an openpyxl file, you will also need the ‘PIL’ library that can be installed with: $ pip install pillow or browse, pick the latest version and head to the bottom of the page for Windows binaries. Getting the source¶ Source code is hosted on bitbucket.org. You can get it using a Mercurial client and the following URLs: - $ hg clone -r 2.1.2 or to get the latest development version: - $ hg clone Usage examples¶ Tutorial¶ Cookbook¶ Charts¶ Read/write large files¶ Working with styles¶ Conditional Formatting¶ Data Validation¶ API Documentation¶ Indices and tables¶ Release Notes¶ - 2.1.2 (unreleased) - 2.1.1 (2014-10-08) - 2.1.0 (2014-09-21) - 2.0.5 (2014-08-08) - 2.0.4 (2014-06-25) - 2.0.3 (2014-05-22) -)
http://openpyxl.readthedocs.org/en/latest/
CC-MAIN-2014-42
refinedweb
381
62.48
01 June 2011 11:34 [Source: ICIS news] LONDON (ICIS)--BASF has declared force majeure on supplies of butanediol (BDO) from its site at Ludwigshafen, Germany, following a fire on 30 May in a precursor plant, a company source said on Wednesday. BASF, which produces 190,000 tonnes/year of BDO at Ludwigshafen, declared force majeure on 31 May after the fire disrupted the production of the feedstock acetylene, the source said. The BDO plant is now idle and technicians are investigating the situation, the source added. Supplies of tetrahydrofuran (THF), polytetrahydrofuran (PTHF) and n-methylpyrrolidone have also been affected, the source said. In Europe, demand for BDO in the second quarter has been strong. Sources say offtake is healthier than expected, and requests for additional material are being denied because of limited availability. Second-quarter European BDO contract prices rose by €200/tonne ($290/tonne) from the first quarter to settle at €2,050–2,100/tonne FD (free delivered) NWE (northwest ?xml:namespace> Global BDO supply is expected to tighten further as a result of the unplanned outage at Ludwigshafen. BASF’s production of polybutylene terephthalate (PBT) at the site was not affected by the fire, the source said. ($1 = €0.69) For more on butanediol
http://www.icis.com/Articles/2011/06/01/9464995/basf-declares-force-majeure-on-bdo-from-ludwigshafen.html
CC-MAIN-2014-42
refinedweb
208
51.89
Bungholio's Barnyard Brawl 4 This page contains the info for "Bungy's Barnyard Brawl 4 empire game. EMPIREHOST: sheepfarm.game-host.org EMPIREPORT: 6666 Game Opened: 20:37 (EST) SUN 25 NOV 2007 First Update: 22:30 (EST) THU 29 NOV 2007 Game Ended: 22:30 (EST) SUN 30 DEC 2007 G ame I nfo : Page Last Updated:9pm 31JAN2008 BBB4 was an informal style game. It was pre-set at 25-Updates long, with daily updates at 10:30PM (EST). BBB4 had a fixed-incremental tech theme, where all countries gained the exact same amount of tech, and they could not produce any tech of their own. This allowed countries to focus their attention on fighting. - = [ Empire - Final Power Report ] = - as of Sun Dec 30 22:30:01 2007 sects eff civ mil shell gun pet iron dust oil pln ship unit money RingTone 369 97% 228K 12K 5.6K 599 11K 17K 2.7K 3.6K 407 107 85 69K WelshStar 261 84% 135K 14K 3.8K 306 10K 12K 1.1K 2.0K 69 58 55 50K CowPie 224 91% 129K 14K 2.0K 453 17K 5.3K 862 2.3K 195 61 81 45K Zern 135 97% 85K 7.6K 4.2K 519 7.8K 6.2K 527 1.2K 120 36 60 76K blueshift 111 89% 68K 9.6K 3.8K 537 2.7K 10K 219 1.4K 45 48 40 68K Hate 62 100% 38K 5.8K 2.5K 222 4.4K 4.9K 518 1.2K 62 33 35 96K Zentradi 29 34% 5.7K 894 69 42 1.0K 747 2.5K 249 20 1 15 88K SPIKE 27 88% 13K 1.9K 230 24 110 182 274 319 0 26 16 -876 LKHU 2 55% 559 112 44 29 0 0 0 0 0 19 0 23K Ichthys 3 100% 386 452 221 47 398 0 419 0 0 13 0 12K Hacksaw 0 0% 0 3 0 0 0 0 0 0 0 2 0 28K Norway 0 0% 0 0 0 0 0 0 0 0 0 0 0 15K ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- worldwide 1.2K 91% 704K 67K 23K 2.8K 55K 57K 9.2K 12K 918 404 387 570K Report: Sun Dec 30 22:30:01 2007 # name tech research education happiness 1 CowPie 83.75 0.01 0.00 5.34 2 WelshStar 83.75 0.01 0.00 0.00 3 Hate 83.75 0.01 0.00 0.00 4 Zern 83.75 0.01 0.00 0.00 5 Ichthys 83.75 0.01 0.00 0.43 6 RingTone 83.75 0.01 0.00 10.25 7 blueshift- 83.75 0.01 0.00 10.42 8 Zentradi 83.75 0.01 0.00 0.00 11 LKHU 83.75 0.01 0.00 0.00 13 Hacksaw 83.75 0.01 0.00 0.00 14 Norway 83.75 0.01 0.00 0.01 15 SPIKE 83.75 0.01 0.00 11.41 VERSION REPORT : Wolfpack Empire 4.3.10 The following parameters have been set for this game: World size is 128 by 80. There can be up to 99 countries. By default, countries use their own coordinate system. Use the 'show' command to find out the time of the next update. The current time is Tue Jan 08 20:34:35. An update consists of 60 empire time units. Each country is allowed to be logged in 1200 minutes a day. It takes 8.33 civilians to produce a BTU in one time unit. A non-aggi, 100 fertility sector can grow 0.12 food per etu. 1000 civilians will harvest 1.3 food per etu. 1000 civilians will give birth to 3.3 babies per etu. 1000 uncompensated workers will give birth to 2.5 babies. In one time unit, 1000 people eat 0.5 units of food. 1000 babies eat 6.0 units of food becoming adults. No food is needed! Banks pay $250.00 in interest per 1000 gold bars per etu. 1000 civilians generate $8.33, uncompensated workers $0.17 each time unit. 1000 active military cost $41.67, reserves cost $4.17. Up to 95 avail can roll over an update. Happiness p.e. requires 1 happy stroller per 10000 civ. Education p.e. requires 1 class of graduates per 10000 civ. Happiness is averaged over 192 time units. Education is averaged over 192 time units. The technology/research boost you get from the world is 40.00%. Nation levels (tech etc.) decline 1% every -20 time units. Tech Buildup is limited to logarithmic growth (base 2.33) after 1.00. SectorsShipsPlanesUnits Maximum mobility12712712075 Max mob gain per update60906045 Mission mobility cost--000 Max eff gain per update--100100100 Maintenance cost per update--6.0%6.0%6.0% Max interdiction range88--8 The maximum amount of mobility used for land unit combat is 5.00. Ships on autonavigation may use 6 cargo holds per ship. Fire ranges are scaled by 0.75. Flak damage is scaled by 1.75. Torpedo damage is 2d45+43. The attack factor for para & assault troops is 0.50. 12% of fallout leaks into each surrounding sector. Fallout decays by 28% per update Damage toSpills to Sector People Mater. Ships Planes LandU. Sector --100%100% 0% 4% 30% People 10% -- -- -- -- -- Materials 10% -- -- -- -- -- Efficiency 10% -- -- -- -- -- Ships 10%100%100% -- 0% 0% Planes 10% 0% 0% -- -- -- Land units 10% 0%100% -- 0% 0% You can have at most 640 BTUs. You are disconnected after 15 minutes of idle time. Options enabled in this game: ALL_BLEED, BRIDGETOWERS, EASY_BRIDGES, FALLOUT, GODNEWS, GO_RENEW, INTERDICT_ATT, LANDSPIES, NOFOOD, NOMOBCOST, NO_PLAGUE, PINPOINTMISSILE, SAIL, SHOWPLANE, TECH_POP Options disabled in this game: AUTO_POWER, BLITZ, FUEL, GUINEA_PIGS, HIDDEN, LOANS, LOSE_CONTACT, MARKET, MOB_ACCESS, NO_FORT_FIRE, RES_POP, SLOW_WAR, SUPER_BARS, TRADESHIPS, TREATIES See "info Options" for a detailed list of options and descriptions. The person to annoy if something goes wrong is: Bungholio (empire_bungholio @verizon.net or @hotmail.com). Here's how the Tech negative-decay worked out : Upd Tech 0 40.00 1 41.20 2 42.44 3 43.71 4 45.02 5 46.37 6 47.76 7 49.19 8 50.67 9 52.19 10 53.76 11 55.37 12 57.03 13 58.74 14 60.50 15 62.32 16 64.19 17 66.11 18 68.10 19 70.14 20 72.24 21 74.41 22 76.64 23 78.94 24 81.31 25 83.75 Here is the basics of "info BBB4" The Ships/Units/Planes/Sectors/Nukes have been modded quite substantially from the standard gamebase. Here are the basics of the mods (other than the obviousness of them being totally re-tech'd) : SHIPS: FRG/TT - firing ranges increased to 2 (from 1) SS/OE/OD - removed, these added micromanagement GS - added, with more speed and cheaper than PZ7 PT - torp removed. CAL - number of planes/choppers mod'd ECA - escort light carrier - added, with reduced capabilities, and fires a little at subs under it DD - has reduced firing range CAR - this is the supercarrier, it carries a lot of stuff, but cannot fit into a canal SH - hunts subs well BBC - is the super battleship with lots of firepower but it cannot fit into canal sectors. UNITS: SHP - shepherd unit added, but lower cost and less effectiveness than PZ7 SPY/COM - reduced costs for these units PONY - slightly better attack power than in PZ7 H1B - these engineers are homebound and don't have much strength MAR - low tech HAR - faster speed and more attack power PLANES: check att/def/ranges for things, as these have been mod'd to fit in new tech cycle. F3 - added Missiles - now within tech range HB - lots of load for carrying 'heavy things' Choppers - Buildable in the tech 60's Nukes: See for yourself, these puppies start at tech 70. Everyone will get some. Until you reach tech 74, all nukes will need to be delivered via HB. After that, some missiles will carry some nukes to their final destination. Sectors: ~/^/=/@/z - all hold 200 civs for max pop z - canal sector, allows ships to enter/exit/load/unload at 2+% = - BridgeSpans need tech 45 @ - Towers need tech 60 ~ - reduced mobility (was 0.4/0.2 now 0.3/0.1) l/t/r - not player designatable - this is a semi-fixed tech game (where you increase in tech by 3% per update) dust only in mountains, plains sectors produce oil. Commands: convert - converting enemy civs to UWs has been disabled. Your initial starting UW's have been removed. Batteries not included either. RTFM. SHIPS : Command : show s b 85 Printing for tech level '85' lcm hcm avail tech $ fb fishing boat 25 15 75 0 $180 frg frigate 30 30 110 0 $600 tt troop transport 50 50 170 10 $800 cs cargo ship 60 40 160 20 $500 os ore ship 60 40 160 20 $500 gs gun ship 25 35 115 25 $500 tk tanker 60 40 160 35 $600 pt patrol boat 20 10 60 40 $300 ms minesweeper 25 15 75 40 $400 lc light cruiser 30 40 130 45 $800 ft fishing trawler 25 15 75 45 $300 sb submarine 30 30 110 47 $650 hc heavy cruiser 40 50 160 48 $1200 cal light carrier 50 60 190 50 $2000 bb battleship 50 70 210 52 $1800 eca escort carrier 40 50 160 52 $1500 dd destroyer 30 30 110 54 $550 car aircraft carrier 60 70 220 55 $2500 sh sub hunter 30 30 110 60 $600 bbc battlecruiser 60 80 240 64 $2500 ls landing ship 60 40 160 75 $1000 Command : show s c 85 Printing for tech level '85' cargos & capabilities fb fishing boat 300c 10m 900f 15u fish canal frg frigate 60m 10s 2g 60f semi-land canal tt troop transport 120m 20s 4g 120f semi-land canal cs cargo ship 600c 50m 300s 50g 900f 1400l 900h 250u supply canal os ore ship 30c 5m 990i 990d 200f 45u 990r canal gs gun ship 25m 30s 3g 50f canal tk tanker 30c 5m 990p 200f 990o 25u oiler supply canal pt patrol boat 2m 12s 2g 5f canal ms minesweeper 10m 100s 1g 90f mine sweep canal lc light cruiser 100m 40s 5g 100f mine canal ft fishing trawler 300c 10m 900f 15u fish canal sb submarine 25m 36s 5g 80f torp sonar mine sub canal hc heavy cruiser 120m 100s 8g 200f canal cal light carrier 150m 200s 2g 200p 180f plane canal bb battleship 200m 200s 10g 900f canal eca escort carrier 100m 100s 2g 100p 120f dchrg plane canal dd destroyer 60m 40s 4g 80f dchrg sonar mine canal car aircraft carrier 250m 300s 4g 400p 900f plane sh sub hunter 60m 40s 4g 80f dchrg sonar mine canal bbc battlecruiser 300m 300s 15g 400f semi-land ls landing ship 400m 10s 1g 300f land canal Command : show s s 85 Printing for tech level '85' s v s r f l p h x p i p n i n l e p def d s y g r d n l l fb fishing boat 12 12 12 2 0 0 0 0 0 0 frg frigate 60 31 20 3 2 1 2 0 0 1 tt troop transport 70 24 28 3 2 2 2 0 0 1 cs cargo ship 22 29 29 3 0 0 2 0 0 1 os ore ship 22 29 29 3 0 0 0 0 0 1 gs gun ship 44 31 25 4 6 2 0 0 0 0 tk tanker 79 27 38 3 0 0 0 0 0 1 pt patrol boat 10 40 8 2 1 1 0 0 0 0 ms minesweeper 10 26 12 2 0 0 0 0 0 0 lc light cruiser 50 31 26 5 6 3 2 0 0 1 ft fishing trawler 10 25 13 2 0 0 0 0 0 0 sb submarine 25 20 4 4 3 3 0 0 0 0 hc heavy cruiser 70 30 26 5 8 4 4 0 0 1 cal light carrier 60 30 35 5 2 2 0 15 10 5 bb battleship 95 26 30 6 10 7 2 0 0 1 eca escort carrier 50 30 31 4 1 1 0 10 5 0 dd destroyer 45 35 17 4 3 3 1 0 0 1 car aircraft carrier 80 32 37 7 3 3 0 25 20 10 sh sub hunter 50 35 13 4 6 3 0 0 0 0 bbc battlecruiser 105 28 40 7 12 8 1 0 0 0 ls landing ship 40 30 28 2 0 0 6 0 0 2 PLANES : Command : show p b 85 Printing for tech level '85' lcm hcm crew avail tech $ f1 Sopwith Camel 8 2 1 32 45 $400 lb TBD-1 Devastator 10 3 1 36 47 $550 f2 P-51 Mustang 8 2 1 32 50 $400 zep Zeppelin 6 2 3 30 51 $750 as anti-sub plane 10 3 2 36 55 $550 ssm V2 15 15 0 65 55 $500 f3 F8F BeerCat 8 3 1 34 55 $600 mb medium bomber 14 5 3 44 56 $1000 tr C-56 Lodestar 14 5 3 44 60 $1000 es P-38 Lightning 9 3 1 35 62 $700 tc transport chopper 8 2 2 32 63 $700 srbm Atlas 20 20 0 80 65 $1000 hb B-26B Marauder 20 6 2 52 65 $1200 nc AH-1 Cobra 8 2 2 32 66 $800 ac AH-64 Apache 8 2 2 32 68 $800 jf1 F-4 Phantom 12 4 2 40 70 $1000 irbm Titan 25 25 0 95 74 $1500 icbm Minuteman 30 30 0 110 76 $2500 sam Sea Sparrow 3 1 0 25 78 $200 Command : show p c 85 Printing for tech level '85' capabilities f1 Sopwith Camel tactical intercept VTOL lb TBD-1 Devastator bomber tactical VTOL light f2 P-51 Mustang tactical intercept light zep Zeppelin tactical cargo VTOL spy as anti-sub plane tactical ASW mine sweep ssm V2 tactical VTOL missile f3 F8F BeerCat tactical intercept light mb medium bomber bomber tactical tr C-56 Lodestar cargo para es P-38 Lightning tactical escort tc transport chopper cargo VTOL light helo para srbm Atlas tactical VTOL missile hb B-26B Marauder bomber nc AH-1 Cobra tactical VTOL helo ASW sweep ac AH-64 Apache tactical VTOL helo jf1 F-4 Phantom tactical intercept light irbm Titan tactical VTOL missile icbm Minuteman tactical VTOL missile sam Sea Sparrow intercept VTOL missile light x-light Command : show p s 85 Printing for tech level '85' acc load att def ran fuel stlth f1 Sopwith Camel 78 1 4 4 9 1 0% lb TBD-1 Devastator 43 2 0 4 12 1 0% f2 P-51 Mustang 70 1 5 5 13 1 0% zep Zeppelin 53 3 0 -2 20 2 0% as anti-sub plane 75 2 0 4 19 2 0% ssm V2 53 3 0 4 8 0 0% f3 F8F BeerCat 84 1 7 7 13 1 0% mb medium bomber 40 4 0 6 18 3 0% tr C-56 Lodestar 0 7 0 3 19 3 0% es P-38 Lightning 54 1 6 6 19 2 0% tc transport chopper 0 3 0 3 11 2 40% srbm Atlas 54 6 0 6 13 0 0% hb B-26B Marauder 81 8 0 5 21 5 0% nc AH-1 Cobra 50 2 0 3 11 2 0% ac AH-64 Apache 22 1 0 4 11 2 40% jf1 F-4 Phantom 41 1 12 12 14 3 0% irbm Titan 56 8 0 10 18 0 0% icbm Minuteman 56 10 0 15 38 0 0% sam Sea Sparrow 0 0 0 15 3 0 0% LAND UNITS : Command : show l b 85 Printing for tech level '85' lcm hcm guns avail tech $ shp shepherd 10 10 0 50 15 $250 cav cavalry 10 5 0 40 30 $500 art artillery 20 10 0 60 35 $800 spy infiltrator 10 5 0 40 40 $500 linf light infantry 8 4 0 36 40 $300 tra train 100 50 0 220 40 $3500 inf infantry 10 5 0 40 45 $500 sup supply 10 5 0 40 50 $500 lat lt artillery 20 10 0 60 55 $500 com commando 10 5 0 40 55 $1000 pony ponies 20 10 0 60 57 $550 aau aa unit 20 10 0 60 58 $500 hat hvy artillery 40 20 0 100 60 $800 h1b h1b engineer 10 5 0 40 64 $1500 mar marines 10 5 0 40 65 $1000 har hvy armor 20 10 0 60 65 $500 eng engineer 10 5 0 40 70 $3000 lar lt armor 10 5 0 40 71 $600 Command : show l c 85 Printing for tech level '85' capabilities shp shepherd 10m 15f recon cav cavalry 20m 12f light recon art artillery 25m 40s 10g 24f light spy infiltrator light recon assault spy linf light infantry 25m 1s 15f light assault tra train 990m 990s 200g 990p 500i 500d 100b 990f 990o 990l 990h 150r supply train heavy inf infantry 100m 24f light assault sup supply 25m 200s 10g 300p 100i 100d 10b 300f 200l 100h supply light lat lt artillery 25m 20s 6g 12f light com commando 3s light recon assault spy pony ponies 35m 15f light aau aa unit 20m 5s 12f light flak hat hvy artillery 25m 80s 12g 24f h1b h1b engineer 20m 3s 15f engineer mar marines 100m 4s 60f light marine assault har hvy armor 100m 3s 48f eng engineer 20m 3s 12f engineer light assault lar lt armor 50m 4s 30f light recon Command : show l s 85 Printing for tech level '85' s v s r r a f a a x l p i p a n c i m a f f p n att def vul d s y d g c r m f c u l d shp shepherd 0.9 0.9 63 25 16 1 1 0 0 0 0 0 0 0 0 0 cav cavalry 1.6 0.6 73 36 18 4 3 0 0 0 0 0 0 0 0 0 art artillery 0.1 0.5 64 20 20 1 0 8 46 5 2 1 0 0 0 0 spy infiltrator 0.0 0.0 74 36 18 4 3 0 0 0 0 0 0 0 0 0 linf light infantry 1.3 1.9 55 31 15 2 1 0 0 0 1 1 0 0 0 0 tra train 0.0 0.0 111 11 25 3 0 0 0 0 0 0 0 0 5 12 inf infantry 1.3 1.9 55 28 15 2 1 0 0 0 0 0 0 0 0 0 sup supply 0.1 0.2 74 28 20 1 0 0 0 0 0 0 0 0 0 0 lat lt artillery 0.2 0.7 56 33 18 1 1 6 9 3 1 1 0 0 0 0 com commando 0.0 0.0 75 35 18 4 3 0 0 0 0 0 0 0 0 0 pony ponies 1.9 0.5 65 36 17 2 2 0 0 0 0 0 0 0 0 0 aau aa unit 0.6 1.2 56 19 20 1 1 0 0 0 1 2 0 0 0 0 hat hvy artillery 0.0 0.2 56 13 20 1 0 11 93 8 4 1 0 0 0 0 h1b h1b engineer 0.6 0.6 71 21 16 2 1 0 0 0 1 1 0 0 0 0 mar marines 1.7 2.8 57 27 14 2 1 0 0 0 1 2 0 0 0 0 har hvy armor 4.1 0.9 47 24 17 1 1 0 0 0 2 1 10 2 0 0 eng engineer 1.4 2.8 47 27 14 2 1 0 0 0 1 1 0 0 0 0 lar lt armor 2.3 1.1 47 45 15 4 4 0 0 0 1 2 25 1 0 0 NUKES : Command : show n b 85 Printing for tech level '85' lcm hcm oil rad avail tech res $ 10kt fission 50 50 25 70 49 70 0 $ 10000 15kt fission 50 50 25 80 51 72 0 $ 12500 50kt fission 60 60 30 90 60 74 0 $ 15000 1ktbb fission 30 30 30 20 28 74 0 $ 5000 100kt fission 75 75 40 120 77 76 0 $ 20000 250kt fission 90 90 50 150 94 78 0 $ 25000 1mt fission 125 125 75 200 130 80 0 $ 30000 Command : show n c 85 Printing for tech level '85' blst dam lbs tech res $ abilities 10kt fission 3 70 7 70 0 $ 10000 15kt fission 3 90 7 72 0 $ 12500 50kt fission 3 100 7 74 0 $ 15000 1ktbb fission 0 100 6 74 0 $ 5000 100kt fission 4 125 7 76 0 $ 20000 250kt fission 5 140 7 78 0 $ 25000 1mt fission 7 180 7 80 0 $ 30000 ----- HERE ARE THE SHIPS/UNITS/PLANES/NUKES ----- Animated Map: BBB4 - GameSettingFiles (contains econfig, planes/units/sectors/ships/nukes/etc config files) Ringtone was granted the win, by decree of the deity umpire and chatter from the other countries.
http://www.angelfire.com/empire2/bungholio/BBB4.html
crawl-002
refinedweb
3,607
83.19
Dragon Curve Using Python Introduction: Dragon Curve Using Python The Dragon Curve is an interesting and beautiful fractal. It is actually a family of self-similar fractals, but I will be focusing on the most famous, the Heighway Dragon, named after one of the NASA physicists who studied it, John Heighway. The Dragon Curve gets it's name for looking like dragon, perhaps a sea dragon (apparently). Even if it doesn't look like a dragon to you, you must admit it's a cool name. And it's not just the name, or the shape, the Dragon Curve has lots of amazing properties: -It can tessellate the plane -It is made of only one line -That line never goes over itself, so theoretically one big enough can hit every single point in a grid exactly once -If you take a strip of paper and fold it in half, then fold that in half, and that in half, and so on, and then unfurl it, you will get a Dragon Curve -If you take all the solutions to polynomials with coefficients in a certain range and graph them on an complex plane, you will find the dragon curve in places Here are some sites that can tell you more: - - - - In this Instructable, I will be showing you how to write a python 3 program using the turtle graphics module to generate the Dragon Curve. Step 1: The Pattern The Dragon Curve, like all fractals, has multiple, progressively more complex forms, called iterations. Above are the 2nd, 4rd, 6th, and 8th iterations of the Dragon Curve. They are all more complex than the last, but all have the same shape, though different orientations. From my personal observations, it seems to turn 45 degrees clockwise every iteration. The cover picture is the 17th iteration. Looking at the above pictures, you can see that the Curve is made of multiple segments at right angles. You can represent each iteration as a string of right and left turns. For example, the first iteration is: R The second iteration is: R R L The third iteration is: R R L R R L L And so on. You can therefor generate different iterations of the Dragon Curve by generating these strings. There are many more ways of generating the Curve, but I will be focusing on this method. In order to find the next iteration from one you already have: 1. Add a right turn to the string 2. Take the original string and flip it backward (first character last, last first) 3. Take the flipped version and switch all the rights to lefts and the lefts to rights 4. Add the flipped version to the new string we made in the first step Lets try this out to to find the 4th iteration from the 3rd: 1. RRLRRLL + R = RRLRRLLR 2. RRLRRLL flipped = LLRRLRR 3. LLRRLRR switched = RRLLRLL 4. RRLRRLLR + RRLLRLL = RRLRRLLRRRLLRLL This way we find that the 4th iteration can be represented as RRLRRLLRRRLLRLL. In the next step I will show how to automate this task and draw it using python. Step 2: The Program In this step I will explain each part of the program. A fully commented version is available at the bottom. Preparation- To start we must have a few things: import turtle This imports the turtle module, which we will be using to display the rights and lefts we generate. You use it to draw by giving commands to the 'turtle', which moves about the screen, drawing a line. This module is useful for our purposes as you can easily tell the turtle to turn right or left or go forward without having to calculate where to place lines. r = 'r' l = 'l' Here we create variables r and l and assign them their appropriate characters. This step is not truly necessary as you could just write it in the program as a string, but I believe the program flows more easily using variables rather than strings. old = r new = old This sets both old, which is the last iteration generated, and new, the iteration being generated, to a right. Old is set to right because that is the first iteration that all of them are based off of. New is also set to right just in case the first iteration it requested. In that case, it will not generate anything and immediately print new. iteration=int(input('Enter iteration:'))length=int(input('Enter length of each segment:')) pencolor=input('Enter pen color:') bgcolor=input('Enter background color:') This block of code takes in all of the users choices, the iteration to be generated (iteration), the length of each segment that makes up the Dragon Curve (length), the color to draw the Curve in (pencolor), and the background color of the turtle graphics window (bgcolor). The segment length can be useful when generating higher iterations as the the Curve can extend off screen with too long segments. The last two choices, the colors, are fun to play around with to make pretty designs. cycle = 1 A certain iteration is generated by building up each iteration from the first R, cycling through some steps to get the next one each time. This variable holds the current cycle, so we first assign it the value of 1. Generation- Now we can start generating the R and L pattern for the desired iteration: while cycle<iteration: We will continue generating the next iteration until the cycle variable tells us we have reached to iteration requested by no longer being less than it. new = (old) + (r) This is the first step of generating the next iteration, adding a right to the end of the last one. We will save this to the new iteration. old = old[::-1] This statement completes the second step by using an extended slice to reverse the order of the characters in the old iteration string. The syntax of this string slicing method is [begin:end:step]. Leaving off the begin and end and setting step to -1, makes the entire string switch around. for char in range(0,len(old)): if old[char] == r: old = (old[:char])+ (l) + (old[char+1:]) elif old[char] == l: old = (old[:char]) + (r) + (old[char+1:]) This block of code switches all of the rights and lefts in the reversed old iteration. It is contained in a for loop which operates for each of the characters in the string, the range being the string length. For each character, it is called from the string and tested. If it's a right, it is replaced with a left by adding the part of the string before the character, a left, and the rest of the string after the character. Otherwise, if it is a left, the same method is used to replace the character with a right. new = (new) + (old) This takes new, which is the original old plus a right, and adds the reversed and switched old on the end. The result is saved to new. The next iteration has been generated! old = new By saving the new iteration to old, we can use it the next cycle to find the next iteration. cycle = cycle + 1 This cycle is done and so we advance the cycle variable. The while loop keeps generating the next iteration until it is satisfied by the cycle variable reaching the iteration requested. Display- printans = input('Display r/l form? (y/n):') if printans=='y': print(new) Especially with the higher iterations, the string of rights and lefts is so long it is impractical and messy to print it out. This code gives the user a choice. It takes a yes or no answer to whether or not the user wants it printed and, if the answer is yes, prints new, which holds the final iteration. turtle.ht() turtle.speed(0) turtle.color(pencolor) turtle.bgcolor(bgcolor) Here we set up the turtle graphics window. We started by hiding the turtle icon and switching off the animation so it is faster. In addition, we apply the user requested colors for the drawing and the background. turtle.forward(length) Here we tell the turtle to go forward the segment length, making the first line which all the rights and lefts come off of. for direction in range(0,len(new)): if new[char] == (r): turtle.right(90) turtle.forward(length) elif new[char] == (l): turtle.left(90) turtle.forward(length) This for loop is similar to the one that switched all the rights and lefts previously in the program. It checks each character in the right-left sequence and make the turtle turn right if it is a right and left if it is a left. After each turn, it also goes forward the desired segment length to complete the angle. In this way, it will graph the entire sequence. Step 3: Have Fun! This program is great to play around with! Try changing the iteration, segment length, and the colors to find something beautiful. Above are some of the ones I've made. Happy Dragon Curving! This looks awesome! I've never heard of the Dragon Curve before. I love learning new and interesting things like this. Thank you! It's true it's not very well known. I learned it from my math teacher myself. Here is what worked for me.
http://www.instructables.com/id/Dragon-Curve-Using-Python/
CC-MAIN-2017-43
refinedweb
1,564
70.02
Unit 6: Signals Table of Contents - 1. What are signals and how are they used - 2. The Wide World of Signals - 3. Signals from the Command Line - 4. Handling and Generating Signals - 5. Alarm Signals and SIGALRM - 6. sigaction()and Reentrant Functions 1 What are signals and how are they used A signal is a software interrupt, a way to communicate information to a process about the state of other processes, the operating system, and hardware. A signal is an interrupt in the sense that it can change the flow of the program —when a signal is delivered to a process, the process will stop what its doing, either handle or ignore the signal, or in some cases terminate, depending on the signal. Signals may also be delivered in an unpredictable way, out of sequence with the program due to the fact that signals may originate outside of the currently executing process. Another way to view signals is that it is a mechanism for handling asynchronous events. As opposed to synchronous events, which is when a standard program executes iterative, that is, one line of code following another. Asynchronous events occur when portions of the program execute out of order. Asynchronous events typically occur due to external events originating at the hardware or operating system; the signal, itself, is the way for the operating system to communicate these events to the processes so that the process can take appropriate action. 1.1 How we use signals Signals are used for a wide variety of purposes in Unix programming, and we've already used them in smaller contexts. For example, when we are working in the shell and wish to "kill all cat programs" we type the command: #> killall cat The killall command will send a signal to all processes named cat that says "terminate." The actually signal being sent is SIGTERM, whose purposes is to communicate a termination request to a given process, but the process does not actually have to terminate … more on that later. We've also used and looked at signals in the context of terminal signaling which is how programs stop, start and terminate. When we type Ctrl-c that is the same as sending a SIGINT signal, and when we type Ctrl-z that is the same as sending a SIGTSTP signal, and when we type fg or bg that is the same as sending a SIGCONT signal. Each of these signals describe an action that the process should take in response. This action is outside the normal control flow of the program, the events arrive asynchronously requiring the process to interrupt its current operations to respond to the event. For above signals, the response is clear — SIGTERM terminate, SIGSTOP stop, SIGCONT continue — but for other signals, the programmer can choose the correct response, which may be to simply ignore the signal all together. 2 The Wide World of Signals Every signal has a name, it starts with SIG and ends with a description. We can view all the signals in section 7 of the man pages, below are the standard Linux signals you're likely to interact with: 2.1 Signal Names and Values Notice that each signal has a name, value, and default action. The signal name should start to become a bit more familiar, the value of a signal is actually the same as the signal itself. In fact, the signal name is just a #defined value, and we can see this by looking at the sys/signal.h header file: FPE 8 /* floating point exception */ #define SIGKILL 9 /* kill (cannot be caught or ignored) */ //(...) In code we use both the #defined value and the number. In general, it is easier to remember the name of the signal, but some signals are often referred to by value, in particular, SIGKILL, whose value 9 is affectionately used in the phrase: "Kill 9 that process." 2.2 Default Actions of Signals Each signal has a default action. There are four described in the table: Term: The process will terminate Core: The process will terminate and produce a core dump file that traces the process state at the time of termination. Ign: The process will ignore the signal Stop: The process will stop, like with a Ctrl-Z Cont: The process will continue from being stopped As we willsee later, for some signals, we can change the default actions. A few signals, which are control signals, cannot have their default action changed, these include SIGKILL and SIGABRT, which is why "kill 9" is the ultimate kill statement. 3 Signals from the Command Line Terminology for delivering signals is to "kill" a process with the kill command. The kill command is actually poorly named — originally, it was only used to kill or terminate a process, but it is currently used to send any kind of signal to a process. The difference between kill and killall is that kill only sends signals to process identified by their pid, killall sends the signal to all process of a given name. 3.1 Preparing for the kill A good exercise to explore the variety of signals and how to use them is to actually use them from the command line. To start, we can open two terminals, in one, we execute the loop program: /*loop.c*/ int main(){ while(1); } Which will just loop forever, and in other terminal, we will kill this process with various signals to see how it responds. Let's start with a signal we all love to hate, the signal that indicates a Segmentation Fault occurs. You might not have realized this, but a Segmentation Fault is a signal generated from the O.S. that something bad has happened. Let's simulate that effect: killall -SIGSEGV loop And in the in the terminal where loop is running, the result is eerily familiar. #>./loop Segmentation fault: 11 The 11 following the message is the signal number: 11 is the signal number for SIGSGV. Note that the default response to a SIGSEGV is to terminate with a core dump. We can explore some of the more esoteric signals and see similar results occur when the program terminates: | Signal | Output | |------------+-----------------------------| | SIGKILL | Killed: 9 | | SIGQUIT | Quit: 3 | | SIGILL | Illegal instruction: 4 | | SIGABRT | Abort trap: 6 | | SIGFPE | Floating point exception: 8 | | SIGPIPE | (no output) | | SIGALAR | Alarm clock: 14 | | SIGUSR1 | User defined signal 1: 30 | | SIGUSR2 | User defined signal 2: 31 | |------------+-----------------------------| 3.2 Sending Terminal Signals with Kill Let's restart the loop program and use kill to simulate terminal signaling. We've been discussing how the terminal control will deliver signals to stop, continue, and terminate a process; there's no mystery here. Those signals are signals that you can send yourself with kill. Let's look at starting the loop program again, but this time killall -SIGSTOP loop And again, the result in the other terminal is quite familiar: #>./loop [1]+ Stopped ./loop If we were to run jobs, we can see that loop is stopped in the background. This is the same as typing Ctrl-z in the terminal. #> jobs [1]+ Stopped ./loop Before, we'd continue the loop program with a call to bg or fg, but we can use kill to do that too. From the other terminal: killall -SIGCONT loop And, after we run jobs, the loop program is running in the background: #> jobs [1]+ Running ./loop & Finally, let's terminate the loop program. The Ctrl-c from the terminal actually generates the SIGINT signal, which stands for "interrupt" because a Ctrl-c initiates an interrupt of the foreground process, which by default terminates the process. killall -SIGINT loop And the expected result: #> jobs [1]+ Interrupt: 2 ./loop & 4 Handling and Generating Signals Now that we have a decent understanding of signals and how they communicate information to a process, let's move on to investigate how we can write program that take some action based on a signal. This is described as signal handling, a program that handles a signal, either by ignoring it or taking some action when the signal is delivered. We will also explore how signals can be sent from one program to another, again, we'll use a kill for that. 4.1 Hello world of Signal Handling The primary system call for signal handling is signal(), which given a signal and function, will execute the function whenever the signal is delivered. This function is called the signal handler because it handles the signal. The signal() function has a strange declaration: int signal(int signum, void (*handler)(int)) That is, signal takes two arguments: the first argument is the signal number, such as SIGSTOP or SIGINT, and the second is a reference to a handler function whose first argument is an int and returns void. It's probably best to explore signal() through an example, and hello world program is where we always start. #include <stdlib.h> #include <stdio.h> #include <signal.h> /*for signal() and raise()*/ void hello(int signum){ printf("Hello World!\n"); } int main(){ //execute hello() when receiving signal SIGUSR1 signal(SIGUSR1, hello); //send SIGUSR1 to the calling process raise(SIGUSR1); } The above program first establishes a signal handler for the user signal SIGUSR1. The signal handling function hello() does as expected: prints "Hello World!" to stdout. The program then sends itself the SIGUSR1 signal, which is accomplished via raise(), and the result of executing the program is the beautiful phrase: #> ./hello_signal Hello World! 4.2 Asynchronous Execution Some key points to take away from the hello program is that the second argument to signal() is a function pointer, a reference to a function to call. This tells the operating system that whenever this signal is sent to this process, run this function as the signal handler. Also, the execution of the signal handler is asynchronous, which means the current state of the program will be paused while the signal handler executes, and then execution will resume from the pause point, much like context switching. Let's look at another example hello world program: /* hello_loop.c*/ void hello(int signum){ printf("Hello World!\n"); } int main(){ //Handle SIGINT with hello signal(SIGINT, hello); //loop forever! while(1); } The above program will set a signal handler for SIGINT the signal that is generated when you type Ctrl-C. The question is, when we execute this program, what will happen when we type Ctrl-C? To start, let's consider the execution of the program. It will register the signal handler and then will enter the infinite loop. When we hit Ctrl-C, we can all agree that the signal handler hello() should execute and "Hello World!" prints to the screen, but the program was in an infinite loop. In order to print "Hello World!" it must have been the case that it broke the loop to execute the signal handler, right? So it should exit the loop as well as the program. Let's see: #> ./hello_loop ^CHello World! ^CHello World! ^CHello World! ^CHello World! ^CHello World! ^CHello World! ^CHello World! ^\Quit: 3 As the output indicates, every time we issued Ctrl-C "Hello World!" prints, but the program returns to the infinite loop. It is only after issuing a SIGQUIT signal with Ctrl-\ did the program actually exit. While the interpretation that the loop would exit is reasonable, it doesn't consider the primary reason for signal handling, that is, asynchronous event handling. That means the signal handler acts out of the standard flow of the control of the program; in fact, the whole program is saved within a context, and a new context is created just for the signal handler to execute in. If you think about it some more, you realize that this is pretty cool, and also a totally new way to view programming. 4.3 Inter Process Communication Signals are also a key means for inter-process communication. One process can send a signal to another indicating that an action should be taken. To send a signal to a particular process, we use the kill() system call. The function declaration is below. int kill(pid_t pid, int signum); Much like the command line version, kill() takes a process identifier and a signal, in this case the signal value as an int, but the value is #defined so you can use the name. Let's see it in use. /*ipc_signal.c*/ void hello(){ printf("Hello World!\n"); } int main(){ pid_t cpid; pid_t ppid; //set handler for SIGUSR1 to hello() signal(SIGUSR1, hello); if ( (cpid = fork()) == 0){ /*CHILD*/ //get parent's pid ppid = getppid(); //send SIGUSR1 signal to parrent kill(ppid, SIGUSR1); exit(0); }else{ /*PARENT*/ //just wait for child to terminate wait(NULL); } } In this program, first a signal handler is established for SIGUSR1, the hello() function. After the fork, the parent calls wait() and the child will communicate to the parent by "killing" it with the SIGUSR1 signal. The result is that the handler is invoked in the parent and "Hello World!" is printed to stdout from the parent. While this is a small example, signals are integral to inter process communication. In previous lessons, we've discussed how to communicate data between process with pipe(), signals is the way process communicate state changes and other asynchronous events. Perhaps most relevant is state change in child processes. The SIGCHLD signal is the signal that gets delivered to the parent when a child terminates. So far, we've been handling this signal implicitly through wait(), but you can choose instead to handle SIGCHLD and take different actions when a child terminates. We'll look at that in more detail in a future lesson. 4.4 Ignoring Signals So far, our handlers have been doing things — mostly, printing "Hello World!" — but we might just want our handler to do nothing, essentially, ignoring the signal. That is easy enough to write in code, for example, here is a program that will ignore SIGINT by handling the signal and do nothing: /*ingore_sigint.c*/ #include <signal.h> #include <sys/signal.h> void nothing(int signum){ /*DO NOTHING*/ } int main(){ signal(SIGINT, nothing); while(1); } And if we run this program, we see that, yes, it Ctrl-c is ineffective and we have to use Ctrl-\ to quit the program: >./ignore_sigint ^C^C^C^C^C^C^C^C^C^C^\Quit: 3 But, it would seem like a pain to always have to write the silly little ignore function that does nothing, and so, when there is a need, there is a way. The signal.h header defines a set of actions that can be used in place of the handler: SIG_IGN: Ignore the signal SIG_DFL: Replace the current signal handler with the default handler With these keywords, we can rewrite the program simply as: int main(){ // using SIG_IGN signal(SIGINT, SIG_IGN); while(1); } 4.5 Changing and Reverting to the default handler Setting a signal handler is not a singular event. You can always change the handler and you can also revert the handler back to default state. For example, consider the following program: /*you_shot_me.c*/ void handler_3(int signum){ printf("Don't you dare shoot me one more time!\n"); //Revert to default handler, will exit on next SIGINT signal(SIGINT, SIG_DFL); } void handler_2(int signum){ printf("Hey, you shot me again!\n"); //switch handler to handler_3 signal(SIGINT, handler_3); } void handler_1(int signum){ printf("You shot me!\n"); //switch handler to handler_2 signal(SIGINT, handler_2); } int main(){ //Handle SIGINT with handler_1 signal(SIGINT, handler_1); //loop forever! while(1); } The program first initiates handler_1() as the signal handler for SIGINT. After the first Ctrl-c, in the signal handler, the handler is changed to handler_2(), and after the second Ctrl-c, it is change again to handler_3() from handler_2(). Finally, in handler_3() the default signal handler is reestablished, which is to terminate on SIGINT, and that is what we see in the output: #> ./you_shout_me ^CYou shot me! ^CHey, you shot me again! ^CDon't you dare shoot me one more time! ^C 4.6 Some signals are more equal than others The last note on signal handling is that not all signals are created equal, and some signals are more equal than others. That means, that you cannot handle all signals because it could potentially place the system in an unrecoverable state. The two signals that can never be ignored or handled are: SIGKILL and SIGSTOP. Let's look at an example: /* ignore_stop.c */ int main(){ //ignore SIGSTOP ? signal(SIGSTOP, SIG_IGN); //infinite loop while(1); } The above program tries to set the ignore signal handler for SIGSTOP, and then goes into an infinite loop. If we execute the program, we find that oure efforts were fruitless: #>./ignore_stop ^Z [1]+ Stopped ./ignore_stop The program did stop. And we can see the same for a program that ignores SIGKILL. int main(){ //ignore SIGSTOP ? signal(SIGKILL, SIG_IGN); //infinite loop while(1); } #>./ignore_kill & [1] 13129 #>kill -SIGKILL 13129 [1]+ Killed: 9 ./ignore_kill The reasons for this are clearer when you consider that all programs must have a way to stop and terminate. These processes cannot be interfered with otherwise operating system would loose control of execution traces. 4.7 Checking Errors of signal() The signal() function returns a pointer to the previous signal handler, which means that here, again, is a system call that we cannot error check in the typical way, by checking if the return value is less than 0. This is because a pointer type is unsigned, there is no such thing as negative pointers. Instead, a special value is used SIG_ERR which we can compare the return value of signal(). Here, again, is the program where we try and ignore SIGKILL, but this time with proper error checking: /*signal_errorcheck.c*/ int main(){ //ignore SIGSTOP ? if( signal(SIGKILL, SIG_IGN) == SIG_ERR){ perror("signal");; exit(1); } //infinite loop while(1); } And the output from the perror() is clear: #>./signal_errorcheck signal: Invalid argument The invalid argument is SIGKILL which cannot be handled or ignored. It can only KILL! 5 Alarm Signals and SIGALRM In the last lesson, we explored signal handling and user generated signals, particularly those from the terminal, the SIGKILL, and the user signals, or the SIGUSR1 and SIGUSR2. In this lessons, we'll explore O.S. generated signals. We'll start with SIGALRM. 5.1 Setting an alarm A SIGALRM signal is delivered by the Operating System via a request from the user occuring after some amount of time. To request an alarm, use the alarm() system call: unsigned int alarm(unsigned int seconds); After seconds have passed since requesting the alarm(), the SIGALRM signal is delivered. The default behavior of SIGALRM is to terminate, so we can catch and handle the signal, leading to a nice hello world program: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <sys/signal.h> void alarm_handler(int signum){ printf("Buzz Buzz Buzz\n"); } int main(){ //set up alarm handler signal(SIGALRM, alarm_handler); //schedule alarm for 1 second alarm(1); //do not proceed until signal is handled pause(); } The program looks very much like our other signal handling programs, except this time the signal is delivered via the alarm() call. Additionally, we are introducing a new system call, pause(). The pause() call will "pause" the program until a signal is delivered and handled. Pausing is an effective way to avoid busy waiting, e.g., while(1);, because the process is suspended during a pause and awoken following the return of the signal handler. 5.2 Recurring Alarms Alarm can be set continually, but only one alarm is allowed per process. Subsequent calls to alarm() will reset the previous alarm. Suppose, now, that we want to write a program that will continually alarm every 1 second, we would need to reset the alarm once the signal is delivered. The natural place to do that is in the signal handler: /* buzz_buzz.c*/ void alarm_handler(int signum){ printf("Buzz Buzz Buzz\n"); //set a new alarm for 1 second alarm(1); } int main(){ //set up alarm handler signal(SIGALRM, alarm_handler); //schedule the first alarm alarm(1); //pause in a loop while(1){ pause(); } } After the first SIGALRM is delivered and "Buzz Buzz Buzz" is printed, another alarm is schedule via alarm(1). The process will resume after the pause(), but since it is in a loop, it will return to a suspended state. The result is an alarm clock buzzing every 1 second. Running with time utility, after about 4 seconds, we saw 4 buzzers. #> time ./buzz_buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz ^C real 0m4.473s user 0m0.001s sys 0m0.002s One thing to note about this example is that while the signal handler code runs asynchronously, it is part of the process program. Calling alarm() not in the main method is a perfectly fine thing to do and necessary. 5.3 Resetting Alarms Let's suppose we want to add a snooze function to our alarm. If the user enters Ctrl-c then we want to reset the alarm to 5 seconds before buzzing again, like snooze. We can easily add a signal handler to do that.); //schedule the first alarm alarm(1); //pause in a loop while(1){ pause(); } } And we can see that the output matches our expectation using the time utility. Note we need to use Ctrl-\ to quit the process. #>time ./snooze_buzz Buzz Buzz Buzz Buzz Buzz Buzz ^CSnoozing! Buzz Buzz Buzz Buzz Buzz Buzz ^CSnoozing! Buzz Buzz Buzz Buzz Buzz Buzz ^\Quit: 3 real 0m15.469s user 0m0.001s sys 0m0.003s There is some interesting dilemas here: What happened to the last alarm? And, what happens if I type Ctrl-C multiple times, how long will it snooze? The anser is, only one alarm may be schedule at one time. Calling alarm() again will reset any previous alarms, so the answer to the questions is that the previous alarm is replaced and subsequent snoozes only resets the previous snooze back to 5 seconds. If that's the case, how might we unschedule a previously scheduled alarm. The way to do that is by scheduling an alarm for 0 seconds, alarm(0). For example, we can finish our alarm clock by adding an "off" switch that listens for Ctrl-\ or SIGQUIT, which will unschedule the alarm and reset the signal handler for Ctrl-c back to the default. void sigquit_handler(int signum){ printf("Alarm Off\n"); //turn off all pending alarms alarm(0); //reinstate default handler for SIGINT // Ctrl-C will now terminate program signal(SIGINT, SIG_DFL); }); //set up signint handler signal(SIGQUIT, sigquit_handler); //schedule the first alarm alarm(1); //pause in a loop while(1){ pause(); } } And we can track the output: #>./offswitch_buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz Buzz ^CSnoozing! ^\Alarm Off ^C While this is a simple example, it demonstrates a lot of power in signal handling and how even in asynchronous settings, a lot of power programming concepts can be used. It's a totally different way to communicate with programs. 6 sigaction() and Reentrant Functions Now, we're starting to get a bit better with signal handling, we need to turn our attention back to the pesky asynchronous nature of signal handling. In the previous part, we harnessed asynchronicity to program with just the signal handlers, but we now need to consider how signal handlers might affect the primary control flow, in particularly when a system call is executed. 6.1 sigaction() To start, we need to learn a more advanced form of the signal() system call, sigaction(). Like signal(), sigaction() allows the programmer to specify a signal handler for a given signal, but it also enables the programmer to retrieve additional information about the signaling process and set additional flags and etc. The decleration of sigaction() is as follows: int sigaction(int signum, const struct sigaction *act, struct sigaction *oldact); The first argument is the signal to be handled, while the second and third arguments are references to a struct sigaction. It is in the struct sigaction that we set the handler function and additional arguments. It has the following members: struct sigaction { void (*sa_handler)(int); void (*sa_sigaction)(int, siginfo_t *, void *); sigset_t sa_mask; int sa_flags; }; The first two fields, sa_handler and sa_sigaction are function references to signal handlers; sa_handler has the same type as the handlers we've been using previously, and we can now write a simple hello world program with sigaction(). void handler(int signum){ printf("Hello World!\n"); } int main(){ //declare a struct sigaction struct sigaction action; //set the handler action.sa_handler = handler; //call sigaction with the action structure sigaction(SIGALRM, &action, NULL); //schedule an alarm alarm(1); //pause pause(); } We'll look at using the more advanced sa_sigaction handler later, but there are other important differences between signal() and sigaction() that are worth exploring. In particular, using sigaction() allows us to explores reentrant functions. 6.2 Reentrant Functions Recall that when a signal handler is invoked, this is done outside the control flow of the program. Normally, the asynchronous invocation of the signal handler is not problematic: the context of the program is saved; the signal handler runs; and, the original context of the program is restored. However, what happens when the context of the program is within a blocking function, like reading from a device ( read()) or waiting for a process to terminate ( wait()). The arrival of the signal and invocation of the signal handler will interrupt the process, waking it from a blocking state to execute the signal handler, but what happens when it returns to the program? The answer to that question depends on the operation being performed. Most functions are reentrant and can be restarted in such cases, but others are explicitly not. For example pause() is explicitly not reentrant by design; once interrupted, it should not return to a blocking state. But functions like read() and wait() need to be told to restart. 6.3 Interrupting System call EINTR Let's first consider a simple example. Here is a rude little program that will ask for a users name, but if they don't answer within 1 second, it starts barking at the user "What's taking so long?" void handler(int signum){ printf("What's taking so long?\n"); alarm(1); } int main(){ char name[1024]; struct sigaction action; action.sa_handler = handler; sigaction(SIGALRM, &action, NULL); alarm(1); printf("What is your name?\n"); //scanf returns the number of items scanned if( scanf("%s", name) != 1){ perror("scanf fail"); exit(1); } printf("Hello %s!\n", name); } Running it, you can see that, yes, if you were to enter your name quickly, the program plays nice: #> ./scanf_fail What is your name? adam Hello adam! But, if you're late at all, it should start the barking process, but that's actually not what happens: #> ./scanf_fail What is your name? What's taking so long? scanf fail: Interrupted system call Instead, we get an error in the scanf() function, which is a library function that must call read() to read from standard input. The read() is interrupted, which results in the error message "Interrupted system call" whose error number is EINTR. From the man page for read: EINTR The call was interrupted by a signal before any data was read; see signal(7). And we can follow up by reading in man 7 signal: Interruption of System Calls and Library Functions by Signal Handlers If)). To avoid this scenario we need to set an additional flag for sigaction(): struct sigaction { void (*sa_handler)(int); void (*sa_sigaction)(int, siginfo_t *, void *); sigset_t sa_mask; int sa_flags; //<--- }; What we are going to do, is update our annoying program from before to use a SA_RESTART flag so that read() will be restarted after the signal handler returns: 6.4 SA_RESTART If we take another look at the struct sigaction, there is a field for flags: void handler(int signum){ printf("What's taking so long?\n"); alarm(1); } int main(){ char name[1024]; struct sigaction action; action.sa_handler = handler; action.sa_flags = SA_RESTART; //<-- restart sigaction(SIGALRM, &action, NULL); alarm(1); printf("What is your name?\n"); //scanf returns the number of items scanned if( scanf("%s", name) != 1){ perror("scanf fail"); exit(1); } printf("Hello %s!\n", name); } 6.5 Not all System Calls are Reentrant It might seem like we've solved all the problems with the SA_RESTART flag, but not all system calls are reentrant. You can see a complete listing in man 7 signal, but we'll focus on one you might encounter in your programing. The sleep() system call is not reentrant. We can see this with a simple example: void handler(int signum){ printf("Alarm\n"); alarm(1); } int main(){ struct sigaction action; action.sa_handler = handler; action.sa_flags = SA_RESTART; //<-- restart sigaction(SIGALRM, &action, NULL); //alarm in 1 second alarm(1); //meanwhile sleep for 2 seconds sleep(2); //how long does the program run for? } A handler for SIGALRM is established with the SA_RESTART flag, so all should be good. An alarm is scheduled for 1 second and then the program should sleep for 2 seconds. The question is: How long does the program take to run? There are two possibilities. First, it could take 2 seconds because SIGALRM is handled the sleep() is restarted with an remaining 1 second to sleep, totaling 2 seconds worth runtime. Alternatively, the program will run for 1 second; once SIGALRM is handled, the sleep will not be restarted, and the program terminates with 1 second of runtime. Let's run it and find out. #> time ./sleep_restart Alarm real 0m1.005s user 0m0.001s sys 0m0.002s The program runs for only 1 second, and that is because sleep() is not reentrant. It cannot be restarted after a signal handler. This is just a singular example, but there are other system calls that meet these conditions, some you might also use, like send() and recv() for network socket programming, and understanding the properties of reentrant system calls is important to becoming an effective systems programmer.
https://www.usna.edu/Users/cs/aviv/classes/ic221/s17/units/06/unit.html
CC-MAIN-2018-43
refinedweb
5,019
61.87
, I showed you how to setup Build and Release for a React.JS app using Azure DevOps. We also discussed a test using a combination of Jest, Mocha and Selenium that was executed as part of the release after deploying the app on Azure App Service. In this tutorial, I will extend the concept by creating a React.JS app using TypeScript, and transpiling it as a regular React.JS app before deploying it. TypeScript is a way to write typed JavaScript. You can create React.JS compliant JavaScript code using TypeScript. When it is transpiled, pure JavaScript is generated which can then be deployed. Before we start, I want to emphasize that this is not a tutorial on TypeScript, but it is an overview of how to build and deploy a React.JS app that is written using TypeScript. Editorial Note: This article assumes familiarity with TypeScript. If you are new to TypeScript, check our TypeScript tutorials. To create such TypeScript code that can get transpiled, create-react-app utility can be used. The command to create such an app is “npx-create-react-app typescript” where “x” is the name of the application. This command will create a React.JS app with a source structure in line with TypeScript requirements. It will have an “index.tsx” file instead of index.js file and in addition to “package.json” file, it will have “tsconfig.json” file.There will also be a few more .ts files and App.test.tsx file for testing the app. First move the entire code under a folder called ssgsems. This step is necessary because some tasks in the build cannot use the root of the repository. Now, under the src folder, which is created by default, let’s add a new folder named components and a file named “hellouser.tsx”. This is going to be the React.JS component that will be used for interactions with the user. Add the following code to that file: import * as React from 'react'; interface IState { user: string; userbuffer: string; } class Description extends React.Component<IState> { public state: IState = { user: "", userbuffer: "", }; public greet = () => { var user = ""; if (this.state.user != "") {user = this.state.user + ", " + this.state.userbuffer;} else {user = this.state.userbuffer;} this.setState({ user }); }; public stateChange = (event: any) => { this.state.userbuffer = event.target.value; } public render() { return ( <div> <p>Enter your name: <input type="text" onChange={this.stateChange} /> </p> <button onClick={this.greet}>Greet User</button> <p>Hello {this.state.user}</p> </div> ); } } export default Description; This component will render a textbox and a button. For this app to get working, you will also need to make a change in index.tsx as follows: import React from 'react'; import ReactDOM from 'react-dom'; import './index.css'; import Hello from "./components/hellouser"; import * as serviceWorker from './serviceWorker'; ReactDOM.render( <Hello user={"World"} userbuffer={"World"} /> , document.getElementById('root') as HTMLElement ); // If you want your app to work offline and load faster, you can change // unregister() to register() below. Note this comes with some pitfalls. // Learn more about service workers: serviceWorker.unregister(); Notice the import statement for the component that you had created earlier and its invocation in the render method. When the user enters her / his name in the textbox and clicks the button, the app will send a greeting. The app will then display all the users it has greeted in the current session. Let’s create a team project named ReactAppTs with a git repository: git remote add origin https://<<yourazuredevopsaccount>>.visualstudio.com/DefaultCollection/ReactAppTs/_git/ReactAppTs Commit and Push the code to the remote repository. Detailed steps for these are available in the earlier article that I had written Now that the code is in the repository, you can create a build pipeline definition. This pipeline definition will have only four tasks: 1. npm install: This will install all the necessary node modules on the agent. It will require the path of folder that contains package.json file. 2. npm custom: This task is to create an optimized version of JavaScript code using the built-in react script to convert TypeScript to JavaScript. You have to also provide the value “run-script build” to the parameter Commands and Arguments. When executed, this task will create a folder named “build” under the ssgsems folder on the agent. 3. Archive Files: This task will create a .ZIP package of the created “build” folder. Set “ssgsems/build” as the folder to archive and “$(Build.ArtifactStagingDirectory)/build.zip” as the archive file to be created. 4. Publish Artifact: This task will publish the “build.zip” archive as a drop artifact. You have to provide path of the created “build.zip” archive, ““$(Build.ArtifactStagingDirectory)/build.zip” as path to publish. When you queue and run this build pipeline, it will create an artifact named “drop” which will contain a build.zip file that is used to deploy the app. Your next step is to create a release pipeline that will deploy the app to Azure App Service – Web App. Create a new Web App as a placeholder for the app that you are going to deploy. An earlier article in this series on Building and Deploying ReactJs apps gave detailed steps to do do. You can read it here > While creating the release pipeline, select the template of “Deploy Node.js app to Azure App Service”. Give the name Dev to the first stage. Add the artifact that is created by the build pipeline that you have created earlier. Select Latest as the default version to be deployed. In the tasks section, Select and Authorize your Azure subscription. Select the Azure Web App that you have created earlier in the App Service name parameter. In the Deploy Azure App Service task, ensure that the text box for “Generate web.config parameters …..” is blank, remove text if any in it. Also ensure that in the “Application and Configuration Settings” section “App Settings” text box is also blank. After you create a new release in this pipeline, you will get the deployed application. It now shows a custom UI instead of the placeholder that was added when the app was created. You can create similar stages in the release pipeline for Test, UAT, Prod etc. and set the tasks as appropriate. If you want to test the app, check the steps in an earlier article in the series. In this article, we stepped you through the creation of a React.JS app using TypeScript and then to do the build and deployment.
https://www.dotnetcurry.com/devops/1499/azure-devops-typescript-reactjs
CC-MAIN-2022-27
refinedweb
1,088
60.01
Archived:How to play a sound file play an audio file in Python and presents some of the options available. Preconditions Note: The method current_volume is not available for S60 1st Edition. Valid file formats must be used. Their availability may vary from one device generation to another and they tipically include WAV, AMR, MP3, AAC, RA. The optional parameter that is passed to the play() method of a Sound object represents the number of times the file will be played (1 by default). It can be an integer or the KMdaRepeatForever data item, which is for continous playback. Source code import appuifw, e32, audio app_lock = e32.Ao_lock() #Define the exit function def quit(): #Close the Sound object s.close() app_lock.signal() appuifw.app.exit_key_handler = quit file_path = u"C:\\Data\\Sounds\\Digital\\mysound.mp3" #The path of the file that is to be played #Open the sound file s = audio.Sound.open(file_path) #Play it indefinitely s.play(audio.KMdaRepeatForever) #Wait for the user to request the exit app_lock.wait() Postconditions The sound file is played for the specified number of times. Known issue Playing a Sound object in a function doesn't work because it goes out of scope when the function returns. If the Sound object goes out of scope, the sound stops playing. This will not work: import audio def play_sound(f): s = audio.Sound.open(f) s.play() play_sound(u"C:\\Data\\Sounds\\Digital\\mysound.mp3") The solution is to make the Sound object a global variable so that it stays in scope when the function returns. This will work: import audio def play_sound(f): global s s = audio.Sound.open(f) s.play() play_sound(u"C:\\Data\\Sounds\\Digital\\mysound.mp3") Additional information The Sound object method stop() is used to stop playback. The Sound object method duration() returns the file's duration in microseconds. The Sound object methods current_position() and set_position(microseconds) return and set the current buffer position, respectively. The Sound object methods current_volume(), set_volume(volume) and max_volume() return and set the current volume and return the maximum volume of the device, respectively. The Sound object method state() returns the current state of the object, which can be one of the following: - ENotReady The Sound object has been constructed but no audio file is open. - EOpen An audio file is open but no playing or recording operation is in progress. - EPlaying An audio file is playing. - ERecording An audio file is being recorded. There is currently no method that pauses the playback of an audio file. The solution is: - Use current_position() to find out the current position - Use stop() to stop the playback - Use set_position() to set the position to the one that was retrieved at step 1 - Use play() to resume playing the file as if it had been paused 23 Sep 2009 28 Sep 2009 S60V1 Can play mp3?? Maaf saya tak terlalu mahir dengan bahasa inggris tapi saya mempunyai pertanyaan.Kenapa n-gage saya tidak dapat membuka mp3 menggunakan script tersebut??Dan muncul error "Not ready"
http://developer.nokia.com/community/wiki/How_to_play_a_sound_file
CC-MAIN-2014-10
refinedweb
506
56.55
I want to write a Mixin who can extend eventual parent class atribute instead replace it. Example: class Base(object): my_attr = ['foo', 'bar'] class MyMixin(object): my_attr = ['baz'] class MyClass(MyMixin, Base): pass print(MyClass.my_attr) ['baz'] MyMixin MyClass.my_attr ['foo', 'bar', 'baz'] MyClass If you really want to do this on class attributes with a mixin, you'll have to use a custom metaclass AND put the mixin after Base in the mro (which means it won't override anything in the base class, just add to it): class Base(object): my_attr = ['foo', 'bar'] class MyMixinBase(type): def __init__(cls, name, parents, attribs): my_attr = getattr(cls, "my_attr", None) if my_attr is None: cls.my_attr = ["baz"] elif "baz" not in my_attr: my_attr.append("baz") class MyMixin(object): __metaclass__ = MyMixinBase class MyClass(Base, MyMixin): pass print(MyClass.my_attr) which makes for quite convoluted code... At this point, just using MyMixinBase as metaclass for MyClass would yield the same result: class MyOtherClass(Base): __metaclass__ = MyMixinBase print(MyOtherClass.my_attr) and for such a use case, a class decorator works just fine: def with_baz(cls): my_attr = getattr(cls, "my_attr", None) if my_attr is None: cls.my_attr = ["baz"] elif "baz" not in my_attr: my_attr.append("baz") return cls @with_baz class AnotherClass(Base): pass print(AnotherClass.my_attr)
https://codedump.io/share/UhOpiv8vZ3WJ/1/extend-class-attribute-with-mixin
CC-MAIN-2017-39
refinedweb
211
54.73
Home >> solar led road light manufacture 180lm/w 180w led street light ,led street light with CE ROHS approval 80w 360d e27 e40 led bulb light ( alluminum pcb built in fin heat sink 12s ) , 3 years warranty, ce rohs approval. 80w e40 led corn bulb light is a new type e40 led street light, its luminous flux is 9800lm high brightness. adopt 800pcs smd2835 china led street light (12w 400w), led flood lights(1w 1500w), led high bay lights(20w 1500w), offered by china manufacturer & supplier zhongshan yaye lighting co., ltd., page6. menu sign in. join free for buyer. search products & suppliers manufacturer/factory, trading company led street light, led street lamp, led road light manufacturer / supplier in china, offering 2019 outdoor cheap adjustabel solar 240w led street lamp with tuv ce& rohs saa approval, new popular high brightness die casting aluminum 120w led light for garden, 200lm/w high power led indoor lighting fixture ip65 ufo led highbay light and so on. china led high mast light suppliers. products. products; suppliers; high lamp lumen 11200lm ip66 80w all in one led solar street light road solar outdoor led lighting. exporter of led high mast light from china (mainland) 60w led high bay light, 60w ip65 ce rohs fcc pse ul dlc approval. led high bay light 60w,ufo led high bay light 60w china led lights manufacturers directory. trade platform for china led lights manufacturers, and global led lights buyers provided by made in china clear history office ce rohs 14w 20w 30w round led ceiling light, led tri proof light. ip66 waterproof lighting fixture 3ft 6ft 7ft led linear light, vapor tight light, led tri proof light peony lighting technology (shanghai) co., ltd., experts in manufacturing and exporting ufo led highbay light, led street light and 1822 more products. a verified cn gold supplier on . import china e40 led street lighting from various high quality chinese e40 led street lighting suppliers & manufacturers on globalsources . we use cookies to give you the best possible experience on our website. for more details including how to change your cookie settings, china led flood lights(1w 1500w) catalog of yaye 18 hot sell ce & rohs approval 90w/100w/120w/150w cob led flood light/led tunnel light ip65, light lamp /led floodlight provided by china manufacturer zhongshan yaye lighting 120w led street light, 150w led street light, 180w led street light manufacturer / supplier in china, offering 140lm/w 15kv ip65 150w led street light, st64 8w dimmable filament bulb, ip65 20w hotel led waterproof ceiling night light with mp3 and so on. 50w led road street flood light garden spot lamp outdoor yard ip65 waterproof us. $39.70 .. outdoor solar lights motion sensor wall light waterproof garden yard lamp 8 led. $6.87. $13.74 .. certification ce approved & rohs compliant. package include 1pcs ac110v 30w led flood light . sm06 led street light. tonyalight s led street light fixtures bring roadway and parking area illumination into the 21st century. our patented optics maximize light distribution and placement with exceptional, low glare illumination on the intended area, while minimizing light 50w led road street light industrial outdoor garden yard spot floodlight lamp. $18.99 150 led solar power flood spot light outdoor garden yard security lamp lights us. $11.37. $32.49. free shipping .. ce and rohs cable yellow/green wire to ground model number ussl180witaok (5 unit minimum order quantity) 180 watt, 22,500 23,400 lumens street parking lot light. design light consortium (dlc) approval for rebates. ul certified for quality. super slim design. constructed using anodized aluminum for the housing with a polycarbonate lens. this beauty is perfect for lighting roadways and parking lots. professional wholesale led light supplier product high lumen ,high brightness led lights,green energy saving led lights factory ,manufacturer in china home; products. led flood lights; led high bay lights 600w led flood lights ce rohs certificates 100lm/w anodized surface black shell 3 years warranty 20w smd3030 outdoor solar led street find led street lighting fixture manufacturers from china. import quality led street lighting fixture supplied by experienced manufacturers at global sources. we use cookies to give you the best possible experience on our website. find led street lights manufacturers, led street lights suppliers & wholesalers of led street lights from china, hong kong, usa & led street lights products from india at tradekey page 8 led street light with ce&rohs certificates (lf led 60w) min order 100 piece solar led. solar led solar street lights are well designed to illuminate large areas with the highest intensity of light. our solar led street lights has a long life span, low power consumption, high efficiency, good coverage and high brightness. our lights are selling all over the world europe, africa, america . amperafrik 8 rue de l est, boulogne 92100, france sep 24, 2019 zhongshan samvol lighting co, . ltd manufacutured in led lighting, mainly for led flood light, led street light and led highbay light and other lamp modules. we have more than 200 employees, more than 10 in quality, and 10 in technics and 4 in aftersale service, 15 in sales. and more than 150 in production. led lighting solar street light. competitive led street light products from various china led street light manufacturers and led street light suppliers are listed below, please view them and select the most helpful info for you. besides, we also provide you with motorcycle related products such as for your choice. street light, outdoor lighting, led road light manufacturer / supplier in china, offering model style enec approval led street lighting, public lighting 1 10v/pwm dimmable ip65 100w city led street light, high efficiency 130lm/w led flood light 30w 200w and so on. high power led street light, you can buy good quality high power led street light , we are high power led street light distributor & high power led street light manufacturer from china market. china 80/100/120/w led street lamp waterproof. led street light, 2019 solar street light, energy saving light manufacturer / supplier in china, offering 80/100/120/w led street lamp waterproof solar led street light, energy saving high lumen ip65 waterproof outdoor led floodlight smd 10w 20w 30w 50w 100w 200w 300w led flood light, high quality slim ip65 smd 50w outdoor led flood light and so on. led street light, led high bay light, led flood light manufacturer / supplier in china, offering 280 w flood light led high mast light for soccer field, 70w 90w 120w enec led garden light park light, 25w to 400w ip66 chips led street light with ul dlc ce cb gs certificates and so on. led street light, led street lamp, led road light manufacturer / supplier in china, offering china manufacturer wholesale octagon street light plasma street light with enec cb listed, 2019 professional stadium sports field ip66 30w outdoor led flood light, energy saving high lumen ip65 waterproof outdoor smd 50w led flood light and so on. company description founded in 2008, ningbo shishang photoelectricity technology co., ltd. rapidly grew as one of the most professional led lighting manufacturers in china. with 10,000 square meters factory working area, more than 50 skilled works, and 10 talented engineers, our monthly production capacity is reaching 30,000 pcs with mixing items. led street lights manufacturers & suppliers wide range of led street lights choose verified led street lights manufacturers .. 50w 100w 150w led road street flood light garden lamp hea 6 . us$ 60 85 / piece .. 180w led solar streetlight, ce, rohs certified, dc 12 to 2 . 50 pieces (min. order) fixture outdoor 150w led street light. high quality aluminum,integration of high grade chip,perfect process.explosion proof glass,high light penetration rubbing,impact resistant.the integration of the radiator,perfect flow design,long lifetime.high quality chip,shine brighter,more stable and longer lieftime than normal.all kinds of situation are available, such as road, park, highway, hotel luohe hilight technology co., ltd., experts in manufacturing and exporting led bulbs, led tubes and 1378 more products. a verified cn gold supplier on . guangdong tuolong technology lighting co., ltd is authorized by hong kong tuolong in the china mainland, taking up a series of led productions, research, development, sale and service as one of the high tech enterprise .. yaye 18 hot sell ce/rohs 30w 50w 60w 70w 80w 100w 120w 150w 160w 180w 200w 240w 250w 300w 400w cob smdsolar led street shenzhen riyueguanghua technology co., limited is best solar powered led street light, smd led street light and aluminum led street light supplier, we has good quality products & service from china. professional wholesale led light supplier product high lumen ,high brightness led lights,green energy saving led lights factory ,manufacturer in china china 2019 new style adjustble led street light with., waterproof ip66 100 300w adjustable led shoebox area parking lot street light for outdoor square highway main road sidewalk sinozoc is a chinese mainland led luminaire designer, manufacturer and energy saving consulting company .. led area lighting and road lighting. led street light, flood light, solar light, garden lights, etc. industrial and warehouse lighting. led high bay lights for warehouse, factories, workshops, etc. certificates. ul, ce, atex, iso9001
http://arllen.cz/1757/solar-led-road-light-manufacture-180lm-w-180w-led-street-light-led-street-light-with-ce-rohs-approval.html
CC-MAIN-2020-45
refinedweb
1,530
60.85
Swing. lcdui, NetBeans IDE 6.5 Beta has been released. NetBeans 6.5 also contains 6.1 features including a powerful and intuitive JavaScript Editor, support for Spring Framework, ClearCase (via Update Center), and RESTful Web Services. Download the beta, learn more, or check out the documentation. The GlassFish Community Acceptance Testing (FishCAT) program has gotten underway. " The main goal of this program is to provide opportunity to community to significantly influence the quality of the GlassFish as well as to get early feedback on stability and usability through community involvement in GlassFish Beta testing cycle. We will start our first FishCAT program for GlassFish v3 Prelude from August to September 2008. It will last about 4 weeks." More details are available in Arun Gupta's blog. Binod uses his blog to announce the release of SailFin V1 Alpha, a JSR 289 (SIP Servlet 1.1) implementation atop GlassFish. Linking to the download, he notes, "though a major chunk of JSR 289 is already implemented, it is still not yet compliant with the final release of the specification." hope it's OK to use this old thread for my question as it's related to the topic: How can my LWUIT application detect whether the device it's running on has a touchscreen or not? Using normal LCDUI classes, the Canvas class offers methods like hasPointerEvents, but I can't find a similar method in the LWUIT framework. Does such a method exist or is it at least planned to be added?" joergjahnke danila explains CLDC threading implications in Re: javacall and MT safe. "CLDC VM uses green threads, so all Java threads are created and managed by the VM within a single OS thread. So the VM calls PCSL routines from the same thread. An exception to this rule is hybrid thread coding style that allows to use additional native OS threads to handle blocking calls." danila Finally, Kenneth Clark asks about Shared XSD for WSDL. "I am sitting with a situation where by I have a stack of web services. Now to make them more manageable I can divide them into smaller service sets. The only problem is that they share the same objects. Now here is where the problem comes in, the front end defines the shared objects in different namespaces which means there are duplicate objects. So is there any way to have the WSDL share the Entity definitions? Do I have to manually write the WSDL in order to achieve this and use the @WebService(wsdlL. And LWUIT was open sourced today: See -- Terrence Posted by: terrencebarr on August 14, 2008 at 08:51 PM
http://weblogs.java.net/blog/editors/archives/2008/08/its_not_peculia.html
crawl-002
refinedweb
440
64.41
Hi, I'm using Red Hat JBoss Developer Studio 11.1.0.GA on Ubuntu 16.04. The following sequence: @ApplicationPath("/services") public class CustomerManagementApp extends Application { } Raises the following Eclipse error: The Application Path should be configured with an @ApplicationPath annotation or in the web deployment descriptor. For some reason the validator "thinks" that the annotation is not present. Of course, compiling the same code outside Eclipse doesn't raise any compilation error and the code works as expected. Doing a maven-update or project-clean against the project will sometimes clean the error but as soon as I edit the code, even id I add only an empty line, the error appears again. Is that anything wrong here ? Please notice that I'm looking for a real solution so don't suggest to disable the JAX-RS validations. Kind regards, Nicolas Please open a JIRA to report this issue.
https://developer.jboss.org/message/977988?tstart=0
CC-MAIN-2018-30
refinedweb
151
55.84
Introduction This article is a continuation of a series of articles describing the often forgotten about methods of the Java language's base Object class. Below are the methods of the base Java Object present in all Java objects due to the implicit inheritance of Object along with links to each article of this series. The focus of this article is the getClass() method, which is used to access metadata about the class of the object you are working with. The getClass() Method The somewhat confusing or misunderstood Object method getClass() returns an instance of the Class class, which contains information about the class that getClass() was called from. Whew, if you're not confused already by that last statement good for you, because I am and I wrote it! Let me try to unpack that sentence with a demonstration of how this might be used. Below you will find the Person class I used in the initial article on the Object class's toString() method. package com.adammcquistan.object; import java.time.LocalDate; public class Person { private String firstName; private String lastName; private LocalDate dob; public Person(String firstName, String lastName, LocalDate dob) { this.firstName = firstName; this.lastName = lastName; this.dob = dob; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public LocalDate getDob() { return dob; } public void setDob(LocalDate dob) { this.dob = dob; } @Override public String toString() { return "<Person:"; } } Let's focus in on the overridden toString() method, which lists the name of the class, Person, along with the values of the instance's fields. Instead of "hard-coding" the name of the class, Person, in the string itself I could have actually used the getClass() method to return an instance of the Class class, which will contain that info and allow me to use it like so: public class Person { // omitting everyting else remaining the same @Override public String toString() { Class c = getClass(); return "<" + c.getName() + ":"; } } This would lead to replacing the original hard-coded "Person" text with the fully qualified class name of "com.adammcquistan.object.Person". The Class class is packed full of different methods that allow you to identify all kinds of things about the class object that getClass() was called on. For example, if I wanted to get a more simplified toString() representation of my Person class I could simply swap out the c.getName() call with c.getSimpleName() like shown below. This in turn would return "Person" instead of the fully qualified class name "com.adammcquistan.object.Person". public class Person { // omitting everyting else remaining the same @Override public String toString() { Class c = getClass(); return "<" + c.getSimpleName() + ":"; } } A major difference in the semantics of how getClass() is used in comparison to the other Object methods is that getClass() cannot be overriden because it is declared as a final method. What is the Class object Good For? At this point you may be asking yourself, "Ok I guess it is pretty cool that I can get information about a class by calling getClass() and retrieving its Class object representation, but how is this useful to me as a programmer?". Believe me, I've asked myself this question as well and my general conclusion has been... it's not. At least it is not really from an everyday programmer's perspective. However, if you happen to be a library or framework developer then you are likely to get very familiar with the information and behavior of Class objects because it is essential for the concept known as reflection. Reflection allows for two primary things: (i) runtime investigation of objects and their contents and, (ii) dynamic access to fields and execution of methods during runtime. Item number one was already demonstrated above by using getClass() to get a runtime representation of the Person class to access either the fully qualified or simple class name in a modified version of the toString() method. The second item is a bit more involved to whip up an example for, but it is something that is quite helpful to be able to access metadata on a class. Some of the information that you can interrogate an instance of Class for are things like constructors, fields, and methods plus other things like inheritance hierarchies of a class, like it's super classes and interfaces. An example of this is the ability to use debuggers in an IDE like Eclipse and Netbeans to see the member's and their values in a class while the program is in execution. Take for example the following: public class Main { public static void main(String[] args) { Person me = new Person("Adam", "McQuistan", LocalDate.parse("1987-09-23")); Class c = me.getClass(); for (Field f : c.getDeclaredFields()) { f.setAccessible(true); try { System.out.println(f.getName() + " = " + f.get(me)); } catch (Exception e) { e.printStackTrace(); } } } } Would output the following: firstName = Adam lastName = McQuistan dob = 1987-09-23 Again, no non-masochist would probably ever do this in regular everyday programming, but you will see this type of thing done often in frameworks. Conclusion In this article I described the meaning and use of the mysterious getClass() method of the Java Object class. I have shown how it can be used to get metadata on a class instance such as the name of the class of an object at runtime as well as provided an explanation of why accessing a Class instance may be useful. As always, thanks for reading and don't be shy about commenting or critiquing below.
https://stackabuse.com/javas-object-methods-getclass/
CC-MAIN-2019-43
refinedweb
930
50.87
Your Account by Kurt Cagle There's no such thing as HTML 4.3. The last HTML specification was HTML 4.01. > singleton attributes with no corresponding value "Singleton" (minimised) attributes have a value. They *only* have a value - the difference between them and normal attributes is that they don't have a *name*. > the lower case preferred notation of XHTML Lowercase isn't "preferred", it's mandatory. > It still becomes the responsibility of browser creators to update their DTDs and conformance engines What on earth are you talking about? Browsers don't have anything remotely resembling "conformance engines". > Namespaces should not necessarily be unique to XML - there's nothing in fact in the original 1.0 specification that limits them from being applied to an HTML5 The original specification is called "Namespaces in XML", continually refers to the documents being XML documents, and relies on the XML 1.0 specification for various definitions. You could argue that the *design* doesn't preclude an HTML 5 application, but the *specification* does. > There's no such thing as HTML 4.3. The last HTML specification was HTML 4.01. Yup - I've been in XHTML too long. Thanks for the correction. The lower-case usage in XHTML is mandatory, in XML it is generally preferred over upper case. When I wrote this originally I had XML rather than XHTML there, then corrected the term for clarification but not the adjective. >What on earth are you talking about? Browsers don't have anything remotely resembling "conformance engines". Actually most browsers do have the ability to determine whether an HTML document is valid or not, most just don't expose that functionality to the user. I know directly that both Mozilla and IE7 do, and I believe that Opera probably does. I don't know about Safari or Konqueror. I don't think there's any exclusionary language specifically targeting the HTML specification here. Yes, namespaces are XML artifacts, perhaps one of the key XML artifacts. I'll accept your assessment on the design vs. specification, but realistically what I see is that in order for there to be some form of reconciliation between HTML and XHTML, redefining namespaces so that they recognize non XML-structures becomes necessary. There's no real way around this that I can see. It sounds like you are confusing validity with well-formedness. I know that Mozilla uses expat, which is a non-validating parser. While I would welcome umbrella namespaces, even one namespace is probably too much for the average population which can barely just type out " I suspect that most xml users would cringe to hear me suggest this, but I would really like to see default namespaces associated with (please don't cringe!) filename extensions. Yes, I know that this is completely inconsistent with hte way xml works (ie- xml data need not even have a 1-1 relationship with a file), but it is really consistent with the way xml is used. And furthermore it is consistent with the way most files are used (ie- if the filename ends with .jpg, I know what type of data is in it). Namespaces are HARD. They represent partitions of a document object model, their verbosity makes them hard to remember, the rules for governing mixed namespace manipulation is confusing and cumbersome and they are an endless source of confusion and error. For all that, namespaces have a very definite, very important role, and any solution that attempts to eliminate namespaces inevitably ends up becoming unmanageable unless the domain involved is VERY small and the rules are very clear. It's one of the biggest uglinesses about XML, but from what I've seen, any other alternative in the end tends to differ only in the syntax used to represent that partitioning. On the other hand, I think that there is a second factor coming into play here. Fewer and fewer sites are actually accepting "live" XML anymore - they use BBS notation, or rich text editors, or WYSIWYG applications in order to build their web pages. The ones that are building these applications, however, should be familiar with namespaces, because they represent to XML what class libraries or packages represent to the typical Java or C++ developer - a means of organizing coherent and connected information under a single interface, a means of differentiating between namespace collisions when necessary and a way of packaging this content for distribution. That's why I'm always dubious about what I call the Aunt Millie's argument "HTML must be so simple that your Aunt Millie could write it". Nope ... unless Aunt Millie is a web designer and developer, she should have no reason to write naked HTML. She should use a WYSIWYG package that will let her do what she wants in a nice pretty fashion, and that TOOL should be completely conversant with namespaces. If its not, that represents poor design and laziness on the part of tool designers. I was not suggesting eliminating namespaces (at least in my previous post, although I perhaps could see myself doing so), but rather associating default namespaces with file extensions so that the user would not have to type out the declaration in certain cases. This get around the Aunt Millie thing without losing namespaces. And for all the computer generated stuff, full namespaces suffice. For xslt, you still need to declare the resultant namespace, but the xslt gang are big boys and girls, they can handle it. The thing is, however, I secretly really do want to banish the over engineered namespace. I mean, come on, how many different xml dialogs of global importance have we seen in the last few years? Saying 'hundreds' would be generous. You might argue that in the future we would see millions more as more users start using xml, and we will only contrain ourselves without namespaces.... Yet in the real world the simple filespace has been more than adequite in differentiating every file type that has ever been made, I've never seen and confusion between jpg and doc, nor have I ever seen two individuals battle it out because they both wanted the ipx extension (now I suppose someone will find an example of this to prove me wrong). Of course there are probably millions of minor xml dialogs that are used in different projects, much like there are probably millions of proprietary small binary formats with individual file extensions, which also haven't derailed the computer industry. The other thing I disagree with is that xml can be complicated because eventaully we all would like it to be generated by software. First of all, this never happens.... XSLT was supposed to be generated by a more human readable script, but these days pretty much everyone writes XSLT directly. A few years back, the "XML editor" was the hot thing, but these days one of the most popular xml editors is emacs (with nxml). I even fell for this one back a couple of years ago when I set up SOAP, thought that the complicated internals really didn't matter as long as the tools took care of everything. Guess what- it turned out to be a leaky abstraction, and I ended up spending hours debugging by staring at complicated ethereal data, or trying to figure out that the problem I had now was due to some strange wsdl issue send a while ago. We finally got rid of the whole thing for a managable REST solution. OK, I oculd go on, but that is all for now. I don't think we're that far apart, though let me respond to your comments. The problem with filename extensions (which are of course themselves a form of namespace prefix - or suffix in this case) is that they favor the early adopters. For instance, suppose that I'm working on a document. In this case "doc" is a perfectly reasonably extension, right? However, Microsoft grabbed that one early, and if I put a doc extension on the end of my file, then unless I get deep into the bowels of the OS, chances are really good that Windows will read your text file as a(n extremely) corrupted Microsoft Word file. The second problem with the file prefix is that it in fact assumes that you're dealing with a file - but what if I have a stream of content generated from a process coming off the web. Chances are the "extension" in this case will be .asp or .php or .jsp or ... you get the idea. These COULD be generating html content (and in probably 90% of the cases would be), but they could also be generating everything else. So this is where mime-types step in, right? Well, sort of. The problem with mime-types and content-types is that someone still needs to standardize on them, in most cases the IETF. It can take months or years (or decades) for such a mime-type to be registered, and until then you just end up hoping that someone else doesn't choose the mime-type extension and make your software break. The problem I've found in general is that there are CONSTANT struggles on filename prefixes, just like there are constant struggles over good domain names. JPG has no obvious competition because the letters do not have any immediate semantic associations. DOC, on the other hand, has a huge semantic association, and there are any number of other word processing vendors that would have LOVED to use DOC rather than ODF, ODT, SXW and so forth. Now, while the web is "slowly" waking up to XML, XML has been used internally by companies for expressing objects for quite some time, and it makes sense to apply your own taxonomy to that namespace - after all, it's meant to solve a local problem. However, there are currently hundreds (if not thousands) of invoice specifications out there, most of which are subtly or in many cases dramatically different. XML presents a real problem there as well, without the concept of namespace. If I create a suffix to my file that is unique to me, how will programs know what to do with a file of that suffix? If the name is uncommon, you can manually make a change to the operating system to override the "default" handler for that namespace, but this also means that there's no clean way to identify that the document, in addition to being an invoice, is also an XML instance. Issues like this crop up all the time when dealing with differing interactive domains (which is what XML is ultimately all about). That's a big part of the reason why namespaces of some sort are necessary, and why I really don't see them disappearing from XML anytime soon. On the "XML is machine generated" assertion, I'll still stand by what I say for the relevant population. A programmer is not Aunt Mildred (unless Aunt Mildred also programs Unix boxes in her spare time, of course). Chances are that if you're a sufficiently capable programmer to write your SOAP and WSDL documents by hand, then you know what you need to know about namespaces. Most Aunt Mildreds can't write HTML at all, save perhaps the very occasional <i> or <b> tags. They aren't trained to think in markup terms, they don't understand the rules, and while they may be domain experts, they aren't domain experts in XML or HTML ... and THEY will almost certainly use a WYSIWYG HTML editor to create content, most often not even really realizing what they're doing. Personally, I tend to think that this argument is typically used by lazy programmers (not putting you in this category, mind you - you've obviously thought through these issues) who find working with XML in any form beyond tags in strings to be too confusing and demanding. They seem to not understand why: doc = "<html><body><h1>"+myText; doc += "</h1><p>" + "Here's some <b>text</b></p>"; doc += "<div>"+myFunc(text)+"</div>"; doc += "</body></html>"; is such an incredibly bad way of dealing with either HTML or XML markup. Now, does that mean I'm a purist with regard to namespaces? No - they CAN be a pain in the butt to deal with, especially when working with compound documents, they make things like XPath a real nightmare, and they do contribute to the slower adoption rate of XML compared to other languages. I'm not saying they are panaceas. I just think that most other solutions are worse. The reality is that XML is hard - declarative programming in general is harder than imperative, because it involves the ability to work with abstracts at a level that most programmers normally don't venture. A good IDE can help (and can turn namespaces from limitations into assets) but even some of the best are only just reaching a level of sophistication that most programmers have expected of their imperative IDEs for years. © 2017, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://archive.oreilly.com/pub/post/restarting_the_html_engine.html
CC-MAIN-2017-39
refinedweb
2,211
58.72
Sc “Hello, world” variety. A simple “Hello, world” example My first Akka Actor example is about as simple as I can make it: import akka.actor.Actor import akka.actor.ActorSystem import akka.actor.Props class HelloActor extends Actor { def receive = { case "hello" => println("hello back at you") case _ => println("huh?") } } object Main extends App { val system = ActorSystem("HelloSystem") // default Actor constructor val helloActor = system.actorOf(Props[HelloActor], name = "helloactor") helloActor ! "hello" helloActor ! "buenos dias" } Here’s a quick description of this example: - Import all the Akka actor classes you need. - Define an Actor, defining behavior in the special “receive” method. - Create a “main” object where you can test things. - You need an ActorSystemto get things started, so create one, with an arbitrary string. - You create an Actor instance with actorOf, and that line shows the syntax for an Actor whose default constructor takes no arguments. (More on this shortly.) - Now that we have an instance of an actor, we send it two messages. Assuming (a) you're working on a Unix/Linux system and (b) you have Scala and SBT installed, you can run this test class as follows. First, create a directory named “Hello,” “HelloTest,” or something similar, and then cd into that directory. Next, create a build.sbt file like this in that directory: name := "Hello Test #1" version := "1.0" scalaVersion := "2.10.0" resolvers += "Typesafe Repository" at "" libraryDependencies += "com.typesafe.akka" % "akka-actor_2.10" % "2.2-M1" Then paste the Scala source code above into a file named Hello.scala, also in that same directory. After you’ve done that, run this sbt command: $ sbt run After sbt downloads all the dependencies it needs, you should see this output: [info] Loading global plugins from /Users/Al/.sbt/plugins [info] Set current project to AkkaHelloWorld (in build file:/Users/Al/Projects/Scala/Tests/Scala2.10/AkkaHelloWorld/) [info] Running Main hello back at you huh? ^C If you’re using Scala 2.10.0 and SBT 0.12 or newer, that should work. The program will sit there and run forever, so as shown, press Ctrl- C when you’re ready to stop it and move on to the next example. Creating an Akka Actor with a class constructor that takes arguments As you saw in the example above, if you want to create an actor whose default constructor takes no arguments, you use this syntax: // create an actor with the "actorOf(Props[TYPE])" syntax val helloActor = system.actorOf(Props[HelloActor], name = "helloactor") However, if your actor’s default constructor takes an argument, such as a String, you create your actor a little differently, using this syntax: val helloActor = system.actorOf(Props(new HelloActor("Fred")), name = "helloactor") This is still pretty simple, you just need to know this syntax. If you want to test this, here’s the “Hello, world” example with a modified constructor to demonstrate this syntax: import akka.actor._ // (1) changed the constructor here class HelloActor(myName: String) extends Actor { def receive = { // (2) changed these println statements case "hello" => println("hello from %s".format(myName)) case _ => println("'huh?', said %s".format(myName)) } } object Main extends App { val system = ActorSystem("HelloSystem") // (3) changed this line of code val helloActor = system.actorOf(Props(new HelloActor("Fred")), name = "helloactor") helloActor ! "hello" helloActor ! "buenos dias" } If you follow the steps listed above, you can run this example, and you should see output like this: $ sbt run [info] Loading global plugins from /Users/Al/.sbt/plugins [info] Set current project to AkkaHelloWorld (in build file:/Users/Al/Projects/Scala/Tests/Scala2.10/AkkaHelloWorld/) [info] Compiling 1 Scala source to /Users/Al/Projects/Scala/Tests/Scala2.10/AkkaHelloWorld/target/scala-2.10/classes... [info] Running Main hello from Fred 'huh?', said Fred ^C More Scala Akka Actor examples, coming soon As I get comfortable with the overall Akka system I’ll share some more Akka Actor examples, but until then, reporting live from sunny (and windy) Boulder, Colorado, this is Alvin Alexander. Good start Nice explanation. It would be nice to understand why a default constructor takes the string name in the first place (even though I looked it in other site, it would be more complete to explain every detail for a newbie like me) Keep up! Add new comment
https://alvinalexander.com/scala/simple-scala-akka-actor-examples-hello-world-actors
CC-MAIN-2017-43
refinedweb
714
57.47
See also: IRC log <trackbot> Date: 28 October 2009 <mischat> zakim, +0162272aabb is me <mischat> i have done it 3 times :( <DKA> Scribe: Dan <DKA> ScribeNick: DKA Harry: any comments on the agenda? <hhalpin> PROPOSED: to approve SWXG WG Weekly -- 21st October 2009 as a true record <hhalpin> +1 <hhalpin> RESOLVED: SWXG WG Weekly -- 21st October 2009 are a true record <hhalpin> PROPOSED: to meet again Wed. November 11th (Note that we are skiping the meeting of Nov. 4th due to conflict with TPAC) Harry: next meeting will be Nov 11 - since we will skip next week due to TPAC. <hhalpin> RESOLVED: Wed. November 11th Dan: Should we have dial-in facilities at TPAC? Harry: we need to reserve this now if we want it... <tinkster1> Probably not Harry: would anyone dial in if they are off-site? <hhalpin> If anyone wants to, they can. Harry: I will send an email out to the list - if we will get more than 3 people then I will request it. ... I will include the agenda in the mail I send. ... Any brief updates on TPAC or social web camp? <bblfish> <bblfish> There are 68 people signed up Dan: update on the camp - the last I saw there were 60 people singed up - ... ... Can people please send that around / facebook / twitter / identi.ca / whatever it... use sky-writing, etc... <bblfish> Sun is paying $10 for lunch <hhalpin> +1 <CaptSolo> Zakim: aadd is me <hhalpin> David Morin from Facebook. Harry: Other good news on TPAC - we have a number of observers coming in... <mischat> there is an echo Harry: Any updates on actions? <hhalpin> ACTION: [CONTINUES] tinkster to summarize Evan's talk [recorded in] <tinkster1> I've posted that to the list already. <hhalpin> ACTION: [CONTINUES] DKA to summarize OSLO and geoLocation conversation in order to spread knowledge of these efforts among W3C members. [recorded in] <rreck> tinkster did you get my email? <mischat> hehe, i have another couple of actions Harry: Giant congratulations to mischat for finishing his action. <hhalpin> ACTION: [CONTINUES] Mischa to describe/implement a report of terms and conditions, and how they change between now and the end of the XG. [recorded in] <tinkster1> <tinkster1> Yes. <bblfish> there is an echo <rreck> i think is almost done <tinkster1> I think it was changed to DONE last week. <rreck> mine <mischat> harry is echoing <bblfish> perhaps everyone is <rreck> echo is based on noise canceling <tinkster1> everyone seems to be. <mischat> yay better <hhalpin> ACTION: [CONTINUES] Adam to write up Matt Lee's talk [recorded in] <hhalpin> ACTION: [CONTINUES] mtuffied to put up wiki page about social networks deploying these technologies. [recorded in] <hhalpin> ACTION:oshani to write up David Recordon and Luke Shepart talk. [recorded in] Harry: Ted will be talking about applying social tech to w3.org... <hhalpin> ACTION: [CONTINUES] cperey to book global lockbox as an invited speaker [recorded in] <mischat> i have no joy with BBC persia <hhalpin> ACTION: [CONTINUES] mtiffiel to invite BBC Persia people to talk about their use of social media [recorded in] <mischat> nope will try calling them again <mischat> will hassle <mischat> for another week <mischat> :) <jsmarr> hi i'm on the call <hhalpin> ACTION: [CONTINUES] petef to look into activitystreams invite, maybe Chris Messina [recorded in] Harry: we can talk user stories and final report next week [at TPAC]. <hhalpin> Will talk about this next week at TPAC. <hhalpin> ACTION: [CONTINUES] adam to write up the boeing use case for enterprise social networks [recorded in] <hhalpin> ACTION: [CONTINUES] rreck to flesh out anonymous usecase connecting to multiple identies and null provenance [recorded in] <hhalpin> ACTION: [CONTINUES] oshani to reframe the geolocation/intent/portability [recorded in] <hhalpin> ACTION: [CONTINUES] bblfish to relabel data protection use case to be about controlled access and takedown to data "about" you [recorded in] <rreck> mine is almost done <hhalpin> ACTION: [CONTINUES] bblfish to merge Family and Group access usecases [recorded in] <rreck> i promise <hhalpin> ACTION: [CONTINUES] hhalpin explain to henry and oshani doc editing process for usecases [recorded in] <bblfish> yes, I have not had a lot of time to do much, on my use cases as I have been quite busy organising <hhalpin> <hhalpin> <hhalpin> Joseph Smarr: I'm CTO at Plaxo - Plaxo's about contact management, keeping together sources of contact data... <hhalpin> Joseph: Lately I've been involved with the so-called open stack. This is also strategic for Plaxo. ... A lot of these technologies are making progress - but everything on the Web wants to add a social component. In order to get that people data ... reconnecting to those people you know is daunting. Most sites are scraping your data using your username/password. ... until now the tools weren't available to do anything else. Every single site has built their own contacts protocol / db / etc... <hhalpin> good point re vendor-specific APIs over scraping Joseph: genesis of portable contacts came out of [trying to solve this problem]. Because even APIs when they are proprietary are not solving the problem. <rreck> ha! Joseph: someone else building contact API should be able to use them... With portable contacts there should be a network effect - ... so we get away from this vendor-by-vendor integration. This is more how the Web should work. ... portablecontacts.net <rreck> simple sounds like a good idea Joseph: getting adoption is the hardest thing - the best way to get adoption is to make the protocol simple ... minimal tool-chain complexity... ... it's not just having a standard schema for contacts, but specifying how you authenticate, etc... end-to-end. But each piece is "not invented" as possible. ... Schema is taken from v-card, etc... ... the discovery protocol is XRD. The auth protocols are http basic or oauth. Simple REST-style [API]. <hhalpin> :) pausing for breath! Joseph: that's basically where things are at. It's a draft spec for a year. Plaxo, Google, MS, [others?] have implemented... <hhalpin> Myspace has implemented, and part of OpenSocial. Joseph: have also got together with the open social folks - they have specfified REST-ful protocols for contacts. They did something similar. We worked with them to make open social a subset of portable contacts. Adam: have you given thought how it might have value to the enterprise? <hhalpin> Enterprise deployment of PortableContacts... Joseph: portable contacts useful as itself as a useful protocol - so if you're building applications that share contact info [even within the enterprise] then I suggest using portable contacts format. ... if you can use portable contacts then you only have to build it once if you want to integrate with other social networks, etc... <Zakim> danbri, you wanted to ask how far schema extensions can be supported in the protocol joseph: the other thing we do is helping to educate market... <rreck> louder please Danbri: How far can the schema be extended without [breaking things]? <danbri> q: how far can schema be extended beforeyou go beyond what protocol can understand? what's the plan for navigating that tension? <mischat> :) Joseph: we like the html model of extensibility - if you want to extend then you can and we'll ignore it if we don't understand it. <rreck> +q Joseph: within each field there is a type value and each in each type values there are canonically specified values but you can use your own as well... the auth protocols are also extensible. <hhalpin> So we have examples of those extra fields. <hhalpin> Wondering if there's a wiki where these can be listed... Joseph: it's intentionally quiet around namespacing, extension mechanisms, etc... entire spec is intended to be extensible. ... Also - there's already been a lot of talk of using the schema in other contexts - e.g. webfinger. ... some work on an XML namespace is beginning. We could use some help and guidance on that. <danbri> <danbri> some discussion of xml ns there <danbri> grddl etc Harry: On that topic - it' would be interesting to see more alignment - between FOAF and portable contacts. Dan: What's the IPR policy of portable contacts, etc... Joseph: We are looking at open web foundation or something... We're not going to make a separate foundation. Most of the people who have contributed to the spec understand that it will go in that direction. Harry: [relating the vcard schema effort...] <tinkster1> Harry: [circular statements that have broken my fingers] <rreck> woman's voice cross talk Joseph: Apple is spearheading a card-dav effort - which is not using the XML schema. <danbri> in i'd love to see a distinction between PC core, and the opensocial stuff which seems less firm Joseph: there wasn't much in there [vcard xml schema] that was controversial. ... it would be good [to align these] ... if you are interested then please say so on the [ietf?] mailing list. <hhalpin> IETF VCard 4.0 list. Joseph: Needs to be linked up to JSON format. JSON can be serializable in modern languages like PHP, etc... so no need for parsing. Harry: it appears that portable contacts is a proper superset of vcard3. <danbri> (also re I18N of names ...) <hhalpin> 1-to-1 mapping above vCard 3.0 and modernized some spellings <hhalpin> :) Joseph: we modernized some of the spellings - adr to address. Didn't need to be as terse as older standards... <danbri> <hhalpin> ACTION: hhalpin to check out FOAF and PortableContacts mapping. [recorded in] <trackbot> Created ACTION-102 - Check out FOAF and PortableContacts mapping. [on Harry Halpin - due 2009-11-04]. Joseph: there is a converted between portable contacts and vcard... <danbri> re names Harry: re additions to vcard 3 spec like tags and photos - this was taken from empirical data about what people were using, right? Joseph: Yes. Harry: does Plaxo keep a record of what sites support tagging, etc... <danbri> +1 on both full field or broken out field or both <danbri> if both, there is assumption that one is derrived from the other Joseph: We used 2 sources for info - all the address book APIs. We used a "composite" field (for e.g. address) where you can store both a composite and broken down version. <hhalpin> chris messina has a blog post has a big spreadsheet on contacts APIs. <hhalpin> referred to OpenSocial (i.e. Kevin Marks) did the field study on social networking data (top 50) <hhalpin> I remember looking for a URI for that spreadsheet!! <hhalpin> Anyone have it to ping into IRC <hhalpin> I saw it once..thought it was a Google Doc. <DKA1> ScribeNick: DKA1 Ron: it would be useful to have uni-directional relationships. <danbri> "but relationships MUST have been bi-directionally confirmed" is interesting <danbri> assumes a b/g db Joseph: our goal was not to model the social graph, but we wanted to express uni- and bi-directional relathionships. ... we have tags and relationships. That's meant to be used for uni-directional relationships. In Plaxo when you connect to someone you can connect as family, friends or business. <rreck> i totally get what you are saying thanks Joseph: when you have bi-directional relationship then you have a special relationship. <Zakim> danbri, you wanted to ask about a PC core (that FOAF might adopt for base addressbook; then diverge on favourite-movie or trust vocab / group descriptions detail) Danbri: on bi-directional confirmation. Does this means there is an assumption of a backing database? <hhalpin> hmmm...just starting thinking of XFN/hCard mappings as well.... Joseph: the idea is that the semantics are relative to the provider of the contacts information. If you want to infer things like connection or relationship then you are trusting the site... <rreck> great observation Joseph: it doesn't [formalize] the provenance. ... if you think there's a compelling use then go build it and jump on the mailing list... Danbri: do you plan to make any distinction of the core of portable contacts and the stuff that's coming in from opensocial? ... your schema has some things like "looking for" and some wooly things from opensocial. Joseph: they're not formally separated but they are called out distinctly. I think the world is changing pretty rapidly [as regards what is "core" contact data]. <hhalpin> My guess is we are going to run over 10-15 minutes. <hhalpin> Is this OK with people? Joseph: on Facebook you care about a phone number but you also care about IM addresses another social information that people have put on [their profiles]. <Zakim> caribou, you wanted to ask about policies attached to data <rreck> +1 run over Danbri: how much naming stuff do you want to do? Describing family names properly for every culture / country.... ... nobody's done it yet - is portable contacts going to do this? Joseph: We try to be internationally savvy. If you have feedback about what else we could do please input it [on our mailing list]. In some cultures people have multiple names. But adding that complexity for everyone... <danbri> when we write our final report - maybe we should suggest some joint pc/w3c effort to evaluate the i18n aspects of naming and come up with a 'good enough for everyone' strategy? <hhalpin> +1 danbri Caribou: I understand it's nice to transfer data from one service to another - but if I fine-tune my privacy data in one site and then the data is transferred to another site then the policies are not transferred withthe data. <danbri> harry, can you bring that up, you've got better audio.... Joseph: Yes - the protocol is designed to only transfer the data, not the privacy metadata. <hhalpin> ok...will bring this up. <hhalpin> That's a place where the W3C is doing some work I think Joseph: in a fully-interoperable soclai web this would be handled. Needs to be explored further... <hhalpin> Carine - do you have a URI? <hhalpin> for policy language stuff? Joseph: There's a portable contacts google group - open to everyone. <rreck> privacy seems against social networking business models <hhalpin> <hhalpin> that's the w3c workshop on access. <hhalpin> OpenID uses URIs. <danbri> well, back to final report? i'd like us to agree that this base core (names stuff, addresses ... but not turnOns/turnOffs) deserves to be done once, well, in a way that works worldwide Henry: [the open stack] always seems a little more complicated then it should be. Seems to be "increasing the powers of silos". Seems like the simple way to identify people would be by URL. <caribou> hhalpin, the W3C PLING is looking at sticky policies <hhalpin> <danbri> ... and PortableContacts imho is the most credible recent effort to bring together addressbook and social network field sets; I hope w3c can add an 18n perspective and assist in getting input from rest of planet Henry: if you look at open social - the way people are identified the first four characters are split off - forcing all URLs to be URNs... It seems to force complexity. <hhalpin> Carine - would it be worthwhile to have PLING do a presentation after the Nov. workshop? <hhalpin> We are booked most of Nov, but Dec. is free for invited speakers. <hhalpin> I'd like to finish up with Invited speakers by Jan so we can move to final report mode... Joseph: to clarify - there is a unique identifier ID field for each contact. It's been a URN by convention but it's meant to be a local stable identifier. But in addition the contacts can have emails URLs, etc... <hhalpin> I was not clear how mature the policy language work was. Joseph: rather than it just being a URL you can explicitly break out the domain, username, etc... ... so flickr, twitter, etc... can be broken out.... to strongly identify a user across silos. ... and the fact that there is a format to use to export the data should be helping to move away from silos and towards interop. Harry: The task fo the SWXG - we're interested in seeing what areas the W3C could play a role [or not] in the wider social web ecosystem. You could imagine something very light-weight... For example, reviewing internationalization (Richard Ishida). ... having some sort of relationship between W3C and OpenWeb Foundation. ... we could recommend that the W3C look into standardizing things. Things like portable contacts, they are already standards, but could W3C have a role here? ... any opinions? <hhalpin> we'll have eran on I think in 2 weeks. <hhalpin> or 3 <hhalpin> expertise around internationalization and with others standards Joseph: I'm a big believer in standards and communities. Both as a structure for collaboration and to add gravitas. I think those are valuable suggestions. Last thing we want is a set of competing standards bodies. ... diversity is good. ... all parties should feel like their voices are heard... Harry: One issue facing w3c is the membership structure. This incubator group is open - more or less anyone can join - but that's not normal for w3c. Is that a barrier [to participation ... ] in w3c? ... we have OWF on the phone in 2 or three weeks and we are trying to figure out what they are doing with process... Joseph: Portable contacts has happened very organically. Mix of big companies, small companies, independents, etc... I understand why some of these bodies have rules about charter and IPR upfront, but nature of some of these efforts is light and bottom-up... ... [path to standardization] could move from light-weight process to more formalized standards body... ... to the extent that more process is necessary then absolutely. <danbri> ok here's a quick braindump re FOAF and Portable Contacts - <danbri> ... can take that to the list for discussion cc Joseph <hhalpin> 1 year and a half, pretty stable around 2009... <hhalpin> have had as updates. <hhalpin> or on the mailing list. <hhalpin> ACTION: hhalpin to ping issue richard ishida to join portable contacts list [recorded in] <trackbot> Created ACTION-103 - Ping issue richard ishida to join portable contacts list [on Harry Halpin - due 2009-11-04]. <hhalpin> yes <mischat> bye all <Adam> thanks joseph1 <bblfish> thanks joseph1 <danbri> If designing a form or database that will accept names from people with a variety of backgrounds, you should ask yourself whether you really need to have separate fields for given name and family name. <danbri> This will depend on what you need to do with the data, but obviously it will be simpler to just use the full name as the user provides it, where possible."""" <danbri> from <hhalpin> thanks alot jsmarr! <hhalpin> Meeting adjourned <jsmarr> thanks guys <jsmarr> btw, is this irc transcript available/archived on the web at all/ <danbri> jsmarr, when we do a call on widget apis for social i'll bug you to join us again ... interesting links with mobile web there... <hhalpin> We'll ping Ted Guild over looking at PortableContacts for w3.org when his time frees up. <jsmarr> ? <jsmarr> e.g. that i could link to for others to see the transcript <hhalpin> <hhalpin> trackbot, end meeting This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Found Scribe: Dan WARNING: No scribe lines found matching ScribeNick pattern: <Dan> ... Found ScribeNick: DKA Found ScribeNick: DKA1 ScribeNicks: DKA, DKA1 Default Present: DKA, Carine, +1.510.931.aaaa, Henry, +0162272aabb, MacTed, mischat, hhalpin, tinkster, rreck, +1.858.442.aacc, +03539149aadd, melvster Present: DKA Carine +1.510.931.aaaa Henry +0162272aabb MacTed mischat hhalpin tinkster rreck +1.858.442.aacc +03539149aadd melvster Found Date: 28 Oct 2009 Guessing minutes URL: People with action items: adam bblfish cperey dka explain hhalpin mischa mtiffiel mtuffied oshani petef rreck tinkster[End of scribe.perl diagnostic output]
http://www.w3.org/2009/10/28-swxg-minutes.html
CC-MAIN-2016-36
refinedweb
3,287
73.68
Running Bottle in Pythonista and another script also Sorry in advance for this stupid question. But is it possible to have bottle running in Pythonista and have another script run that is using requests? I am guessing not, but maybe there is a trick to do this. I know I can use safari to see results etc, but it's painful process. You might be able to run one in Editorial and the other in Pythonista. How would that be for crazy?!? @ccc, really?? It was that easy 🆒 A good reason for every one to have editorial also 😁 But again, thank you. This is great. I just got it running in Editorial and the url just returns a dict. In Pythonista, just ran import requests msg_url = '' def msg_get(url): x= requests.request('GET', url) print x.json() if __name__ == '__main__': msg_get(msg_url) And got my dict back. Super cool. Editorial timed out running the script, but I haven't tried for it not to time out yet. I am sure I will find it. Now 1:47am here, I have been here since 1pm. Time to have a sleep 😃 But I am really excited about this... I have yacc or yaci (yet another cool idea ) Requests is such a great API because it does wonderful complexity hiding and at the same time allows you to pack so many transforms into a single, short call. You could also write: import pprint, requests url = '' if __name__ == '__main__': pprint.pprint(requests.get(url).json()) Untested, but it seems to me if you present a ui, with a button action that runs your requestsapp, then start the server, you can run one inside the same app. I believe Cethric has an HTML testing app that does this. i did test this for launching the bottle script. instead of just run(host=....), use a threading.Thread # coding: utf-8 from bottle import route, run from threading import Thread import requests @route('/hello') def hello(): return "Hello World!" #run t=Thread(target=run,kwargs={'host':'localhost', 'port':8080, 'debug':True}) t.start() print requests.get('').content @JonB , with your Thread test, did it work? I got at 61 error connection refused. I closed all my browser safari Windows, I terminated editorial and re started Pythonista, just to be sure I didn't have something open all ready. I tried it both in Pythonista and editorial, got the same error. Any ideas? But if I get past this point, I will try running in a panel in Pythonista This is a timing thing... If you insert a time.sleep(1) then you will get a connection. Flask lets you run(threading=True) but I do not see that same capability in Bottle. @ccc , yes requests looks great. Hopefully it's as simple to use at it appears. I have done no web programming. But I would like to create a very simple form app. Just want to do this via simple messaging with dicts. On the backend use SQLite. But I really want to keep it simple (KISS) because that's all I am capable of at the moment. I wrote a simple messaging protocol many years ago to talk with mainframes and mini's. It was text based and very simple. But still easy to extend and implement. We didn't get grief from the backend programmers. But because of knowledge of apple events at that time, it gave me the idea how to be extensible. But anyway, to begin with, using the dialogs module, I want to be able to easily create a form and post it to the backend which will write it to a SQLite db. Of course then to be able to retrieve info. But again, I need to keep it so simple. Otherwise I will be off learning all sorts of new stuff again. I think it should be possible. I also have a PythonAnywhere account, which includes bottle. So I have a place to host. Anyway, I will try 😇😭😀👍 @omz Are you deprecating the support for Flask in Pythonista v2.0 or will both Bottle and Flask be supported? I noticed that code completion for Falsk is not working in the latest beta. @ccc No, Flask support is not deprecated at all. I've actually added it in 2.0, it's just that the documentation is missing for now. Code completion should still work via Jedi, it's just a bit slower than it would be if it could use the documentation index. @ccc , given the simple job I am trying to do, do you think bottle is the best for the job. From what I read about mini or micro web frame works, bottle seems to be the simplest I updated with the missing line (t.start()), and this did work as written (sorry, this post didnt get posted) I am not exactly clear how to kill the server though-it survives global clears. Putting a KeyboardException inside the hello() works, but you could also just reuse the app() and just update route, etc. @JonB, thanks. It works like that in Pythonista. If I add some routes to the bottle backend, it does not work as expected. But if I run it again, it works. Some errors are generated, but the new routes are live. But of course it can be fine tuned. Getting late for me now. I will try running the script from the tools menu tomorrow. Also it does say hit control c to quit. I will try sending the ESC seq for control c tomorrow various ways. Maybe that's the answer The issue is there is not a good way to send a KeyboardInterrupt to a Thread. You might be able to use default_app() and just use bottle.app().routes=[] to clear out the old routes @JonB , ok I will take a look thanks. I was just thinking about the debugging environment for the control c, as it comes up in the console to stop the server type control c Works for flask but something similar might work for Bottle. I have tried the ways you guys suggested. I couldn't get the server to stop. For testing its ok though. I will just move on. I will use the thread version in Pythonista unless I have problems with it. If I have problems or it's funky, will move it to Editorial. But for testing both options are better than my expectations Lol, I still haven't got back to this yet. I jump down one rabbit hole to find myself recursively jumping down other rabbit holes 😱😭 But I just had a idea the other day. Just wanted to share it in case it's not that dumb or had another use than my idea usage. My idea was just passing files to Editorial via bottle. Since editorial can link/sync to Dropbox in app, could present some interesting sync possibilities. I know this can be done direct to Dropbox with with the API. There was just something appealing about this idea. Anyway, maybe be crap, just a thought. My sense would be that the best approach would be to use the Dropbox API on Pythonista and the builtin Dropbox capabilities of Editorial but to focus your efforts on simplicity and ease of setup/use. Slick and simple will beat convoluted almost every time.
https://forum.omz-software.com/topic/2451/running-bottle-in-pythonista-and-another-script-also
CC-MAIN-2017-47
refinedweb
1,232
75.5
Sometimes our lines and polygons are way too complicated for the purpose. Let’s say that we have a beautiful shape of Europe, and we want to make an interactive online map using that shape. Soon we’ll figure out that the polygon has too many points, it takes ages to load, it consumes a lot of memory and, in the end, we don’t even see the full detail. To make things easier, we decide to simplify my polygon. Simplification means that we want to express the same geometry, using fewer points, but trying to preserve the original shape as much as we can. The easiest way is to open QGIS and use its Simplify processing tool. Now we face the choice – which simplification method should we use? Douglas-Peucker or Visvalingam? How do they work? What is the difference? What does a “tolerance” mean? This short post aims to answer these questions. I’ll try to explain both of these most popular algorithms, so you can make proper decisions while using them. First let’s see both how algorithms simplify the following line. Douglas-Peucker Douglas-Peucker, or sometimes Ramer–Douglas–Peucker algorithm, is the better known of the two. Its main aim is to identify those points, which are less important for the overall shape of the line and remove them. It does not generate any new point. The algorithm accepts typically one parameter, tolerance, sometimes called epsilon. To explain how is epsilon used, it is the best to start with the principle. Douglas-Peucker is an iterative algorithm – it removes the point, splits the line and starts again until there is no point which could be removed. In the first step, it makes a line between the first and the last points of the line, as illustrated in the figure below. Then it identifies the point on the line, which is the furthest from this line connecting endpoints. If the distance between the line and the point is less than epsilon, the point is discarded, and the algorithm starts again until there is no point between endpoints. If the distance between the point and the line is larger than epsilon, the first and the furthest points are connected with another line and every point, which is closer than epsilon to this line gets discarded. Every time a new furthest point is identified, our original line splits in two and the algorithm continues on each part separately. The animation below shows the whole procedure of simplification of the line above using the Douglas-Peucker algorithm. Visvalingam-Whyatt Visvalingam-Whyatt shares the aim with Douglas-Peucker – identify points which could be removed. However, the principle is different. Tolerance, or epsilon, in this case, is an area, not a distance. Visvalingam-Whyatt, in the first step, generates triangles between points, as illustrated in the figure below. Then it identifies the smallest of these triangles and checks if its area is smaller or larger than the epsilon. If it is smaller, the point associated with the triangle gets discarded, and we start again – generate new triangles, identify the smallest one, check and repeat. The algorithm stops when all generated triangles are larger than the epsilon. See the whole simplification process below. A great explanation of Visvalingam-Whyatt algorithm with an interactive visualisation made Mike Bostock. Which one is better? You can see from the example above, that the final line is the same, but that is not always true, and both algorithms can result in different geometries. Visvalingam-Whyatt tends to produce nicer geometry and is often preferred for simplification of natural features. Douglas-Peucker tends to produce spiky lines at specific configurations. You can compare the actual behaviour of both at this great example by Michael Barry. Which one is faster? Let’s figure it out. I will use a long randomised line and Python package simplification, which implements both algorithms. The results may vary based on the actual implementation, but using the same package seems fair. I generate randomised line based on 5000 points and then simplify if using both algorithms with the epsilon fine-tuned to return a similar number of points. import numpy as np from simplification.cutil import ( simplify_coords, # this is Douglas-Peucker simplify_coords_vw, # this is Visvalingam-Whyatt ) # generate coords of 5000 ordered points as a line coords = np.sort(np.random.rand(5000, 2), axis=0) # how many coordinates returns DP with eps=0.01? simplify_coords(coords, .0025).shape # 30 / 5000 # how many coordinates returns VW with eps=0.001? simplify_coords_vw(coords, .0001).shape # 28 / 500 %%timeit simplify_coords(coords, .0025) %%timeit simplify_coords_vw(coords, .0001) And the winner is – Douglas-Peucker. By a significant margin. Douglas-Peucker: 74.1 µs ± 1.46 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) Visvalingam-Whyatt: 2.17 ms ± 23.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Douglas-Peucker is clearly more performant, but Visvalingam-Whyatt can produce nicer-looking geometry, pick the one you prefer. Percentage instead of epsilon Some implementations of simplification algorithms do not offer tolerance / epsilon parameter, but ask for a percentage. How many points do you want to keep? One example of this approach is mapshaper by Matthew Bloch. Based on the iterative nature of both, you can figure out how that works :). What about topology? It may happen, that the algorithm (any of them) returns invalid self-intersecting line. Be aware that it may happen. Some implementations (like GEOS used by Shapely and GeoPandas) provide optional slower version preserving topology, but some don’t, so be careful. I have gaps between my polygons If you are trying to simplify GeoDataFrame or shapefile, you may be surprised that the simplification makes gaps between the polygons where there should not be any. The reason for that is simple – the algorithm simplifies each polygon separately, so you will easily get something like this. If you want nice simplification which preserves topology between all polygons, like mapshaper does, look for TopoJSON. Without explaining how that works, as it deserves its own post, see the example below for yourself as the last bit of this text. import topojson as tp topo = tp.Topology(df, prequantize=False) topo.toposimplify(5).to_gdf() If there’s something inaccurate or confusing, let me know.
https://martinfleischmann.net/line-simplification-algorithms/
CC-MAIN-2022-27
refinedweb
1,052
57.87
I am trying to parse a .log file from MTurk in to a .csv file with rows and columns using Python. My data looks like: P:,14142,GREEN,800,9;R:,14597,7,y,NaN,Correct;P:,15605,#E5DC22,800,9;R:,16108,7,f,NaN,Correct;P:,17115,GREEN,100,9;R:,17548,7,y,NaN,Correct;P:,18552,#E5DC22,100,9;R:,18972,7,f,NaN,Correct;P:,19979,GREEN,800,9;R:,20379,7,y,NaN,Correct;P:,21387,#E5DC22,800,9;R:,21733,7,f,NaN,Correct;P:,22740,RED,100,9;R:,23139,7,y,NaN,False;P:,24147,BLUE,100,9;R:,24547,7,f,NaN,False;P:,25555,RED,800,9;R:,26043,7,b,NaN,Correct;P:,27051,BLUE,800,9; Currently, I have this, which puts everything in to columns: import pandas as pd from pandas import read_table log_file = '3BF51CHDTWYBE3LE8DZRA0R5AFGH0H.log' df = read_table(log_file, sep=';|,', header=None, engine='python') Like this: P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct|P|15605|#E5DC22|800|9|R|16108 However, I cannot seem to be able to break this in to multiple rows, so that it would look more like this: P|14142|GREEN|800|9|R|14597|7|y|NaN|Correct| |P|15605|#E5DC22|800|9|R|16108 ie. where all the "P"s would be in one column, where all the colors would be in another, the "r"s, etc.. Any help is much appreciated. Thank you! You can use In [16]: df = pd.read_csv('log.txt', lineterminator=';', sep=':', header=None) to read the file (say, 'log.txt') assuming that the lines are terminated by ';', and the separators within lines are ':'. Unfortunately, your second column will now contain commas, which you'd like to logically separate. You can split the commas along the lines, and concatenate the result to the first column: In [17]: pd.concat([df[[0]], df[1].str.split(',').apply(pd.Series).iloc[:, 1: 6]], axis=1) Out[17]: 0 1 2 3 4 5 0 P 14142 GREEN 800 9 NaN 1 R 14597 7 y NaN Correct 2 P 15605 #E5DC22 800 9 NaN 3 R 16108 7 f NaN Correct 4 P 17115 GREEN 100 9 NaN 5 R 17548 7 y NaN Correct 6 P 18552 #E5DC22 100 9 NaN 7 R 18972 7 f NaN Correct 8 P 19979 GREEN 800 9 NaN 9 R 20379 7 y NaN Correct 10 P 21387 #E5DC22 800 9 NaN 11 R 21733 7 f NaN Correct 12 P 22740 RED 100 9 NaN 13 R 23139 7 y NaN False 14 P 24147 BLUE 100 9 NaN 15 R 24547 7 f NaN False 16 P 25555 RED 800 9 NaN 17 R 26043 7 b NaN Correct 18 P 27051 BLUE 800 9 NaN 19 \n\n NaN NaN NaN NaN NaN
https://codedump.io/share/GMiuS0nqxWu3/1/how-to-create-rows-and-columns-in-a-csv-file-from-log-file
CC-MAIN-2016-50
refinedweb
480
63.53
The Cloud Sphinx Theme documentation has moved to You will be redirected to the new location in 10 seconds. cloud_sptheme - helper functions¶ The main cloud_sptheme module contains the following helper functions: cloud_sptheme. get_theme_dir()¶ Returns path to directory containing this package’s Sphinx themes. Deprecated since version 1.7: As of Sphinx 1.2, this is passed to Sphinx via a setup.pyentry point, and no longer needs to be included in your documentation’s conf.py. cloud_sptheme. get_version(release)¶ Derive short version string from longer ‘release’ string. This is quick helper which takes a project’s releasestring, and generates the shortened versionstring required by conf.py. Usage example for conf.py: import cloud_sptheme as csp ... # The version info for the project you're documenting from myapp import __version__ as release version = csp.get_version(release)
https://pythonhosted.org/cloud_sptheme/lib/cloud_sptheme.html
CC-MAIN-2022-05
refinedweb
134
61.22
Barcode Software c# barcode image generation library Listin g 2-7. Using Pointers in Objective-C Make gs1 datamatrix barcode in Objective-C Listin g 2-7. Using Pointers private void btnNext_Click (object sender, System.EventArgs e) { SaveSettings(); if (_index < _album.Count - 1) { _index ++; ResetSettings(); SetOriginals(); } } barcode crystal reports 2005 generate, create bar code express none with .net projects BusinessRefinery.com/ bar code barcodeean implementaion example java use jvm bar code implementation to get bar code for java files BusinessRefinery.com/ barcodes In the Licensing Mode step you specify the licensing details. generate, create barcodes binary none for java projects BusinessRefinery.com/ bar code asp.net bar coder generate, create barcode tutorial none with .net projects BusinessRefinery.com/ bar code About the author using console asp .net to receive barcodes on asp.net web,windows application BusinessRefinery.com/ bar code vb.net codice genera barcode using store .net framework to build bar code in asp.net web,windows application BusinessRefinery.com/ bar code Using Commands That Produce a List qr bidimensional barcode data export for .net c# BusinessRefinery.com/QR Code 2d barcode use excel qr barcode generation to integrate qr for excel source BusinessRefinery.com/Quick Response Code You can see that the context pointer is cast to an object pointer, and messages are sent to that object. to use qr code jis x 0510 and qr code data, size, image with .net barcode sdk solomon BusinessRefinery.com/qr codes to print qr code and qr code iso/iec18004 data, size, image with .net barcode sdk easy BusinessRefinery.com/Quick Response Code When the first edition of this book was released, Silverlight 2 was just starting to gain adoption. It was a brand new technology from Microsoft (the managed code version was, anyway), one with strong competition. Though Silverlight 2 could have been used to build rich business applications, it didn t have the chops to be a strong contender in that space yet. Many of the features in this section are useful in applications of all sorts; I hate to classify them under the heading of business, but that s the largest consumer of these features. Validation, covered in chapter 13, (chapter 17). A good bit of the validation functionality rolled into the Silverlight runtime actually came from that project. WCF RIA Services provides a way to share validation and logic between the client and server as well as a framework for validation, data access, and security, shareable between Silverlight and other clients. WCF RIA Services builds upon the WCF stack, but it s not the only enhancement there. The Silverlight networking stack, described in chapter 14, was greatly enhanced to support in-browser and out-of-browser operation, as well as SOAP 1.2 and a number of new protocol enhancements. These changes make it easier to use Silverlight behind a firewall where the services often have different requirements than those on the Internet. Despite the promises of a paperless office, printing (covered in chapter 19) is still a staple of business applications everywhere. Printing in Silverlight is optimized for relatively short reports or documents, as well as for the equivalent of print-screen operations. It s super simple to use as easy as working with XAML on the pages. Finally, we come to a biggie: out-of-browser sandboxed and trusted applications. Covered in section 5.1, out-of-browser mode was one of the most significant enhancements made to how Silverlight operates. Silverlight 3 introduced the basic out-ofbrowser. qr code image dimensional for .net BusinessRefinery.com/QR winforms qr code use winforms qr codes integrating to incoporate qr code iso/iec18004 for .net express BusinessRefinery.com/Denso QR Bar Code Creating a New Playlist generate barcode 39 c# using picture vs .net to encode ansi/aim code 39 on asp.net web,windows application BusinessRefinery.com/USS Code 39 generate, create data matrix barcodes office none in word documents projects BusinessRefinery.com/Data Matrix barcode This section will deal with the way that PowerShell deals with errors that terminate the current flow of execution, also called terminating errors. Here we ll cover the language elements for dealing with terminating errors and how you can apply these features. You re probably familiar with terminating errors when they are called by their more conventional name exceptions. So call them what you will; we re going to delve into catching these terminating errors. In other words, how can you grab these errors and take corrective or remedial actions instead of simply giving up and quitting winforms data matrix generate, create barcode data matrix binary none with .net projects BusinessRefinery.com/gs1 datamatrix barcode use asp.net web forms data matrix barcodes generating to access datamatrix on .net custom BusinessRefinery.com/data matrix barcodes <Deployment xmlns="" xmlns: <Deployment.Parts> <AssemblyPart x: </Deployment.Parts> </Deployment> create pdf417 ssrs generate, create pdf417 remote none on .net projects BusinessRefinery.com/barcode pdf417 use excel spreadsheets pdf417 2d barcode generator to render pdf 417 for excel spreadsheets implements BusinessRefinery.com/PDF417 This method uses replaceObjectAtIndex:withObject: to remove the existing object from the collection and replace it with the new object. We don t have to do any explicit memory management with the tire, because NSMutableArray will automatically retain the new tire and release the object that lives at the index, whether it s an NSNull placeholder or a previously stored tire object. NSMutableArray will release all of its objects when it gets destroyed, so the tire will get cleaned up. The tireAtIndex: getter uses the objectAtIndex: method provided by NSArray to get the tire from the array: sql server code128c using barcode maker for reporting services control to generate, create barcode standards 128 image in reporting services applications. table BusinessRefinery.com/code 128 code set c vb net rdlc barcode 39 control use rdlc reports net barcode code39 integrated to incoporate 3 of 9 barcode in .net setting BusinessRefinery.com/Code 3/9 To view your contacts, click the Cloud icon then click the Contacts icon. CHAPTER 5 s RULE C R EATION package com.manning.blogapps.chapter16; import java.io.*; import org.apache.tools.ant.*; import com.manning.blogapps.chapter10.blogclient.*; public class PostBlogResourceTask extends BaseBlogTask { private String filename = null; private String contenttype = null; private String resourcename = null; private String urlproperty = "uploadurl"; The DataGrid class represents a control that displays a collection of data as a grid of rows and columns. The data displayed and the style in which it is presented is fully configurable. This class is part of the System.Windows.Forms namespace, and inherits from the Control class. See .NET Table 4.1 on page 104 for a list of members inherited by this class. AllowNavigation AlternatingBackColor Gets or sets whether navigation is permitted. Gets or sets the background color to use on every other row in the grid to create a ledgerlike appearance. Gets or sets the text to appear in the caption area. Gets or sets whether the caption is visible. Other properties related to this and other grid areas are also provided. Gets or sets a DataGridCell structure representing the cell in the grid that has the focus. Gets or sets the index of the selected row. Gets or sets which list in the assigned data source should be displayed in the grid. Gets or sets the source of data for the grid. Gets or sets the value of a cell. This property is the C# indexer for this class. Gets or sets whether the grid is in read-only mode. Gets or sets the width of row headers in pixels. Gets the collection of DataGridTableStyle objects specifying display styles for various tables that may be displayed by the grid. Attempts to begin an edit on the grid. Returns location information within the grid of a specified point on the screen. This works much like the HitTest method in the MonthCalendar class. Assigns the DataSource and DataMember properties to the given values at run time. Deselects a specified row. Occurs when the current cell has changed. Occurs when a new data source is assigned. Occurs when the user navigates to a new table. Occurs when the user scrolls the data grid. Figure 10-5. A conditional statement using a complex Boolean expression To make this situation a bit easier to swallow, what I usually do is start by assigning the result of the complex Boolean expression to a well-named variable. I can then use that variable in the conditional statement instead of the monstrous expression itself. Figure 10-6 shows an example of this. More Data Matrix on Objective Articles you may be interested barcode generator in vb.net 2005: Full-text searching a data grid in Java Encode Data Matrix ECC200 in Java Full-text searching a data grid qr code reader camera c#: UNDERSTANDING TYPES AND ASSEMBLIES in visual C#.net Generate gs1 datamatrix barcode in visual C#.net UNDERSTANDING TYPES AND ASSEMBLIES barcode scanner code in asp.net: Figure 18.10 Viewing a document in the XPS viewer in visual C# Draw PDF417 in visual C# Figure 18.10 Viewing a document in the XPS viewer barcode printer in vb.net: Controls, charts, and storage in Java Implementation QRCode in Java Controls, charts, and storage c# .net print barcode: 29: Troubleshooting in Objective-C Make QR in Objective-C 29: Troubleshooting c# barcode generator wpf: Action in vb Paint Quick Response Code in vb Action c# barcode reader api: Advanced query techniques in C# Integrated barcode 3 of 9 in C# Advanced query techniques c# generate barcode from string: 5: Stick Around: Building Data-Driven Applications with SQLite in Objective-C Print qr-codes in Objective-C 5: Stick Around: Building Data-Driven Applications with SQLite c# print barcode zebra: result in Objective-C Drawer QR in Objective-C result c# generate barcode image: FUNCTIONS AND SCRIPTS in .net C# Develop 39 barcode in .net C# FUNCTIONS AND SCRIPTS barcode reader project in c#.net: Enumeration Base Types in .net C# Make pdf417 in .net C# Enumeration Base Types c# reading barcode from image: INTEGRATING NHIBERNATE LOGGING in C# Display barcode code39 in C# INTEGRATING NHIBERNATE LOGGING c# generate barcode free: UNDERSTANDING TYPES AND ASSEMBLIES in c sharp Add Data Matrix ECC200 in c sharp UNDERSTANDING TYPES AND ASSEMBLIES how to integrate barcode scanner into java application: Result in C# Encoding datamatrix 2d barcode in C# Result zxing qr code generator c#: Auditing in visual C#.net Create qr codes in visual C#.net Auditing code 128 generator c#: Dimensional modeling in C#.net Connect code 128a in C#.net Dimensional modeling c# .net barcode generator free: AUTHOR S NOTE in c sharp Generating USS Code 39 in c sharp AUTHOR S NOTE c# barcode scan event: Writing POCOs in visual C# Develop Code 39 in visual C# Writing POCOs generate and print barcodes c#: GIVING COMMANDS in Objective-C Generating Quick Response Code in Objective-C GIVING COMMANDS how to generate barcode in c# net with example: My favorite DMVs, and why in c sharp Add barcode 128a in c sharp My favorite DMVs, and why
http://www.businessrefinery.com/yc1/55/12/
CC-MAIN-2022-05
refinedweb
1,857
57.37
From: Alexander Nasonov (alnsn-boost_at_[hidden]) Date: 2005-10-03 15:05:38 Daniel James wrote: > Alexander Nasonov wrote: >> 2. Adding new has_value may have side effects, for example, (a) hash<T> may >> compile for not customized type T although T doesn't necesarily have >> properly defined equality function; or (b) it may introduce overload >> resolution ambiguities, in worst case, hash<type_from_6_3_3_1> may stop >> working. > > I don't think this is nearly as bad as you think. Any type from TR1 will > only stop working if you declare hash_value in the boost or std > namespace - which you really shouldn't do. You are right. I was a way too pessimistic. > If a type has a hash_value function but no equality then boost::hash > will compile for it - but a hashed container won't, because it will > require the equality function. So the only time will boost::hash will > work is if it's used in another context where equality isn't required. That's not quite right. For example #include <iostream> #include <boost/functional/hash.hpp> struct Base { bool operator==(Base) const { return true; } }; std::size_t hash_value(Base) { return 0; // Base has no state } struct Derived : Base { int dummy; bool operator==(Derived other) const { return dummy == other.dummy; } }; int main() { Derived x; std::cout << boost::hash<Derived>()(x) << '\n'; } -- Alexander Nasonov Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2005/10/94708.php
CC-MAIN-2021-21
refinedweb
243
57.37
Assume that you have a List of string and you wish you to get the items of the list into a comma separated string , you will be able to it easily do with the help of string.join method. How to generate a comma seperated string from a list in C# ? Assume that the list contains three items Senthil , Kumar , CredAbility as shown below. IList<string> lst = new List<string>{"Senthil","Kumar","CredAbility"}; To get the comma seperated string of all the items in the list , use the string.Join as shown below. string commaseperatedstring = string.Join(",", lst); Here’s the full code snippet that is used in this example. using System; using System.Collections.Generic; namespace ConsoleApp1 { class Program { static void Main(string[] args) { IList<string> lst = new List<string> { "Senthil", "Kumar", "CredAbility" }; string commaseperatedstring = string.Join(",", lst); Console.WriteLine(commaseperatedstring); Console.ReadLine(); } } }
https://developerpublish.com/c-tips-and-tricks-13-create-comma-seperated-string-from-a-list/
CC-MAIN-2019-43
refinedweb
144
58.69
[Editor's note: Volume Shadow copies on Windows completely rock. They give administrative tools (and penetration testers) access to all kinds of wonderful things on Windows, including recently deleted files, files with a lock on them, and much more. They are almost like a nifty side channel into the guts of the Windows file system, making it ripe for exploration, research, and pwnage with a heavy does of plundering mixed in for good measure. Mark Baggett has been one of the main gents (along with Tim Tomes) leading the charge in researching how pen testers can use Volume Shadow copies. In this post, Mark shows how you can use Python to interact with Volume Shadow copies, so you can explore them in-depth. After describing how to list, create, and access such copies from Python, he provides a nifty Python script called vssown.py that does all the work for you. I'm hoping that the info Mark provides here inspires readers to start more aggressive analysis of Volume Shadow copies from their own interactive Python shells to reveal more cool pen test uses of Volume Shadow copies. -Ed.] Volume Shadow copies are immensely useful to penetration testers, often containing a treasure trove of valuable information. What if the domain administrator knows the penetration testers are coming, so he deletes "passwords.txt" from his desktop? No problem! The password file is sitting there waiting for you in the volume shadow copies. Files such as NTDS.DIT (the Active Directory Database) can be locked up by the operating system so you can't safely get to them. With Volume Shadow copies, no problemo! You can create a new volume shadow copy and grab the file from the copy and plunder it for hashes from anyone in the domain. If you are still not convinced, check out these posts that demonstrate the ever-loving-sweetness of Volume Shadow Copies.—the-los.html The bad news is that there are not a lot of great libraries out there that use Volume Shadow copies in Python. The good news is that you can use the win32com module and the Component Object Model (COM) to interface with the WIN32_ShadowCopy service and do everything you need to do. BUT (more bad news here) using COM within Python is a pain in the tushie. The interface to COM objects is less than intuitive, and not easily accessible. BUT (ahhh... a lil' more good news), I've done the hard part for you. If you are creating a Python-based executable for Windows, and you need it to access Volume Shadows copies, this article will make your life a bit easier. Let's start with the easy stuff. To get a list of the Volume Shadow copies on the system is pretty simple. You can take the code from vssown.vbs, lookup the corresponding python statement, and translate the code to Python. (VSSOWN.VBS is downloaded here:). After creating a scripting object and connecting to the server, you use ExecQuery to pull a list of all the Volume Shadow Copies on the system. Then you can use a for loop to step through each of the shadow copies and pull the "DeviceObject" attribute. The DeviceObject attribute is what you will use to access the data in the Volume Shadow Copies. Some excellent sample code with a list of all the attributes you can access is available here: Creating a Volume Shadow Copy is more difficult. To do so, we have to call the "Create" method of the Win32_ShadowCopy COM object and, as I mentioned, documentation on the API is somewhat lacking. Anytime I am trying to learn something new in Python (or much of my penetration testing, for that matter), I experiment with it in a Python Shell. The interactive shell lets you experiment with objects' methods and attributes and see what they are used for, all in real-time, interactively. First, we import the required module and create an object called "wmi" that points to a "Win32_ShadowCopy" COM object. This can be done with two simple lines of code. import win32com.client wmi=win32com.client.GetObject("winmgmts:\\.\root\cimv2:Win32_ShadowCopy") We want to call the "create" method for this Win32_ShadowCopy object. To execute a module, you use wmi object's ExecMethod() method. For example: "wmi.ExecMethod(<method name> , <com object parameters>)". The first argument is a string containing the target method's name. The second argument is a COM object that contains the target method's parameters. Setting those method parameters can be a bit confusing. Even knowing the correct parameters isn't straight forward. Here is how I did it. Remember, wmi points to a Win32_ShadowCopy object. First, I create an object called "createmethod" that points to the "Create" method on that object. Then, I create an object called "createparams" that points to the create methods parameters. After that, you can use the "createparams" object to examine the parameters and set their values. Here is an example: "CreateMethod" points to the "Create" method for the Win32_ShadowCopy object. "CreateParams" points to the input parameters for the Create method. Then, I use list comprehension to examine the names of each of the input parameters. The names of the parameters are stored in the .name attribute. Here you can see that Create method requires a parameter called "Context" and one called "Volume". "Context" is in the first position in the list, while "Volume" is in the second. Then I use list comprehension to examine the current values for each of these parameters. You can see that the "Context" is set to "ClientAccessible" and that the "Volume" is set to "None". If you examine the MSDN page for this object you will see that we have to specify a drive letter in the "Volume" parameter. I want the volume to be set to "C:\". So I set the value and check it again. Now I can call ExecMethod() and retrieve the result. The result will be in the "Properties_" attribute in the form of a list of COM Objects. The ".value" of the first item in the list is the response code. You can find a reference for all of the response codes here: If the response code is a zero, then the 2nd item in the list will contain the ID for the new Volume Shadow Copy. So my new shadow copy ID number is 4CDDDE4F-4B00-49FE-BF07-1D89B3325ADD. Let's check it with VSSADMIN to see if it was really created. To check which volume copies exist we type "vssadmin list shadows". At the bottom of the list we should see our new shadow copy. It is there! We created a Volume Shadow Copy! Note the matching Shadow Copy ID. Put all that together and we get the following function to create a volume shadow copy. Now, here is the best part. You don't have to do ANYTHING special to access data stored in the volume shadow copies. Dude... read that again. That is awesome. Thank you, Volume Shadow copies and Python! You can read from them just like you would any other file in the file system. NOTE: Shadow Copies are read only and cannot be written to with any utilities. So, if you want to see if a file exists you can do that using standard Python file-system-interaction techniques: If you want to open a file and read the file contents, you just treat it like any other file: As a matter of fact, the only two functions I really need to do just about anything I want with Volume Shadow copies are vss_list() and vss_create(). These functions can be easily adapted if you do need something else. If you just want someone to write a script for you that does all this, simply ask and ye shall receive. Well, you don't even have to ask. I already did it for ya. Here you go. Feel free to grab a copy of my vssown.py script, which does all the heavy lifting for you to list, create, and then interact with Volume Shadow copies, all within your comfy interactive Python shell or from within your own scripts. Just download that txt file and shave off the .txt suffix. Here is what it looks like when you run the vssown.py script. Remember, you need admin privs to access the Volume Shadow Copy Service. If you want to learn how to use techniques like this in your own programs, then check out my brand spanking new SANS course, SEC573 Python For Penetration Testers. Here are some upcoming dates for the class: -Mark Baggett Follow me on twitter: @MarkBaggett Posted June 21, 2013 at 9:49 AM | Permalink | Reply Jb Hi, I keep getting this error message with vssown.py -l : Traceback (most recent call last): File "", line 34, in File "", line 30, in main File "", line 16, in vss_list File "C:\\Users\\jb\\Desktop\\pyinstaller-pyinstaller-61571d6\\vssown\ uild\\vssown\\ out00-PYZ.pyz\\win32com.client.util", line 84, in next pywintypes.com_error: (-2147217388, ''OLE error 0x80041014', None, None) The vbs version is working properly. I have tried it on several windows 7 computers, same behavior (vbs version working, python version ko). Any idea on what might be going on ?
https://pen-testing.sans.org/blog/2013/04/12/using-volume-shadow-copies-from-python/
CC-MAIN-2019-18
refinedweb
1,547
72.97
More on Running WordPress on OpenShift More on Running WordPress on OpenShift Join the DZone community and get the full member experience.Join For Free See why enterprise app developers love Cloud Foundry. Download the 2018 User Survey for a snapshot of Cloud Foundry users’ deployments and productivity. OpenShift CLI installation First I will cover installation of CLI. For Windows you will need to install ruby 1.8.7 first, on Linux you will probably have it, so just skip first two steps: - You will need ruby, easiest way to install ruby on Windows is using rubyinstaller for Windows - During installation choose “add to path” option - After installation is done go to CMD/Shell and type “gem install rhc” - When rhc gem is installed run “rhc setup” to finish installation (you will need to provide OpenShift username and password, you can skip adding keys) After finishing last step you are done. One note regarding CLI is that basically some things like creating snapshots or creating aliases can’t be done through web, so CLI is only option. Actually CLI has any option that is available through web and a lot more. To see full list of options just type “rhc”. Creating snapshots and adding aliases Next thing you will need for your application is creating backup or snapshots. Through rhc it’s pretty simple, just run following command: rhc snapshot save appName This command sometimes don’t work for some reason, but in that case you will get which command you need to run through ssh. Login to your application OpenShift uses a public key to securely encrypt the connection between your local machine and your application and to authorize you to upload code. You must create a private and public key on your local machine and then upload the public key before you can connect to your applications’ Git repositories or remotely access your application. On Windows you can use puttygen to generate keys and on Linux you can use “ssh-keygen -t rsa” command. Then copy content of your public key here. Adding your domain name To add your own domain you need a domain of course. Next you need to add aliases to your application: rhc alias add appName domain.com rhc alias add appName After that go to your domain name registrar and add following: @ - URL redirect - www - CNAME - applicationname–namespace.rhcloud.com Then on your WordPress application (General settings) add this URL: (using just domain.com will make your app unavailable)So that is basically it for now, if you have any questions just post a comment and you will get the answer AS }}
https://dzone.com/articles/more-running-wordpress
CC-MAIN-2018-30
refinedweb
439
58.32
go to bug id or search bugs for Description: ------------ The function chunk_split doesn't work in 4.4.7. I know it works in 4.4.4. It just returns false. Reproduce code: --------------- $body = "ABCDEFGHIJK"; $chunken = 2; $end = ";"; var_dump( chunk_split($body,$chunken,$end)); Expected result: ---------------- return (string) AB;CD;EF;GH;IJ;K; Actual result: -------------- (bool) false Add a Patch Add a Pull Request Please try using this CVS snapshot: For Windows: Works perfectly fine here. No feedback was provided for this bug for over a week, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
https://bugs.php.net/bug.php?id=41826
CC-MAIN-2022-05
refinedweb
118
62.17
* Simple {@link LogNode} filter, removes everything except the message. 20 * Useful for situations like on-screen log output where you don't want a lot of metadata displayed, 21 * just easy-to-read message updates as they're happening. 22 */ 23 public class MessageOnlyLogFilter implements LogNode { 24 25 LogNode mNext; 26 27 /** 28 * Takes the "next" LogNode as a parameter, to simplify chaining. 29 * 30 * @param next The next LogNode in the pipeline. 31 */ 32 public MessageOnlyLogFilter(LogNode next) { 33 mNext = next; 34 } 35 36 public MessageOnlyLogFilter() { 37 } 38 39 @Override 40 public void println(int priority, String tag, String msg, Throwable tr) { 41 if (mNext != null) { 42 getNext().println(Log.NONE, null, msg, null); 43 } 44 } 45 46 /** 47 * Returns the next LogNode in the chain. 48 */ 49 public LogNode getNext() { 50 return mNext; 51 } 52 53 /** 54 * Sets the LogNode data will be sent to.. 55 */ 56 public void setNext(LogNode node) { 57 mNext = node; 58 } 59 60 } Except as noted, this content is licensed under Creative Commons Attribution 2.5. For details and restrictions, see the Content License. About Android Auto TV Wear Legal * Required Fields You have successfully signed up for the latest Android developer news and tips.
https://developer.android.com/samples/BasicAccessibility/src/com.example.android.common.logger/MessageOnlyLogFilter.html
CC-MAIN-2015-32
refinedweb
202
53.92
Fel structures and queries. - Felgo 2.15.0 Improvements - Qt 5.10 Improvements for Mobile App Development - Qt Creator 4.5 Support - How to Update Felgo Felgo 2.15.0 Improvements Felgo is now compatible with Qt 5.10 and Qt Creator 4.5. See later parts of this post for the highlights. Felgo Release 2.15.0 also includes many engine improvements: Patch for Qt 5.10 QtGraphicalEffects Issue on iOS Qt 5.10 is available for a while and offers many new features. But there were also some issues that prevented us from providing the update faster. For example, the release of Qt 5.10 introduced a QtGraphicalEffects issue on iOS. Thus, it is not possible to use graphical effects on iOS with Qt 5.10 at the moment. This is a major issue for all mobile developers, so we didn’t release Qt 5.10 for Felgo until we had a fix ready. We took the time to prepare a patch, which is part of this Felgo update now. It allows you to keep working with QtGraphicalEffects on iOS with Qt 5.10. You can always count on Felgo to ship stable Qt builds for mobile development. Note: Qt 5.10 also changes the minimum deployment target for iOS to iOS 10.0. Older versions are no longer supported by Qt. Use Firebase Queries and Data Structures The Firebase Plugin now supports working with complex data structures and queries. Apart from primitive data types, you can now read and write nested objects and arrays: FirebaseDatabase { id: firebaseDb Component.onCompleted: getValue("public/path/to/my/object") onReadCompleted: { if(success) { // parameter "value" can be a nested object/array, as read from your database console.debug("Read value " + value.subarray[3].subproperty.text) } } } This allows you to save and load full data records of your application with a single call from QML. The update also introduces support for queries, with the optional parameter queryProperties. You can use this feature to retrieve your data sorted or filtered by custom conditions: FirebaseDatabase { id: firebaseDb Component.onCompleted: { / }) } } The following example shows how to store data objects of a QML list with Firebase. The demo queries and displays only the twenty most recent entries of the list: import Felgo 3.0 import QtQuick 2.0 App { readonly property int maxListEntries: 20 property int numListEntries property bool loading: false FirebaseDatabase { id: firebaseDb //load total number of entries to compute index of next element realtimeValueKeys: ["public/numListEntries"] onRealtimeValueChanged: if(success && key === "numListEntries") numListEntries = value //load data on startup onPluginLoaded: loadData() onReadCompleted: { if(success) { //display DB data in ListView listView.model = value } } //refresh data directly from DB after saving onWriteCompleted: if(success) loadData() } Page { function loadData() { //load the last X entries, ordered by timestamp property firebaseDb.getValue("public/listEntries", { limitToLast: maxListEntries, orderByChild: "timestamp" }) } function addItem() { //add new DB item with timestamp and text var date = new Date() var timestamp = date.getTime() var text = "Item created at " + date.toString() var dbItem = { timestamp: timestamp, text: text } var index = numListEntries //using numbers as sub-keys makes the DB store data as an array firebaseDb.setValue("public/listEntries/" + index, dbItem) //keep track of total number of entries firebaseDb.setValue("public/numListEntries", numListEntries + 1) } } // App The update also brings more improvements and fixes for the Firebase Plugin. For example, you can access logged in user’s token or disable on-disk caching. To try the Firebase Plugin and test new features, please see the updated PluginDemo source on GitHub: New Felgo Live Client and Store App for iOS and Android Felgo 2.15.0 comes with a reworked Felgo Live Client and Server. The updated Live Client supports all new features and components. It also adds many smaller improvements and fixes. For example, you can now choose whether to show the Felgo Live Client window on top of other open windows. You can do this with a setting in the Live Server application: The updated Live Client also comes with improved QML-error handling and logging capabilities. It now shows error log output on the error screen within the client window. There’s also a new Felgo Live version available for you on the iOS App Store. It includes all the latest features, so make sure to get the updated app to be fully compatible with current version of Felgo Engine and the Live Server. You can download the app for iOS and Android here: Updated Felgo Sample Launcher with Qt World Summit Demo This release also updates the Felgo Sample Launcher that comes with the Felgo SDK. It offers quick access to all Felgo demos and examples. The Sample Launcher now also features the new Qt World Summit app demo: The demo shows many additions like the SocialView components or improved navigation features. The demo is also available for you on GitHub: More Improvements and Fixes Felgo 2.15.0 comes with many other improvements, like better icon support for PictureViewer and PullToRefreshHandler. It also includes fixes for Felgo Multiplayer, Navigation Components and SyncedStore. For an overview of all changes, make sure to see the changelog here. Qt 5.10 Improvements for Mobile App Development Imagine Style, Fusion Style and Improved Qt Quick Controls Qt 5.10 brings many additions for mobile app development, especially for Qt Quick Controls 2. Most prominent is the introduction of two new styles: Imagine style (left) and Fusion style (right): Imagine style is an image-based style. You can now import your assets from Photoshop, which makes working with designers a lot easier. Create a custom look and feel simply by replacing the used images of the style. Fusion style gives a platform agnostic, desktop-oriented look and feel. The update also introduces new types like Action, ActionGroup and MenuBar. It also adds many improvements for existing Quick Controls. For example, better icon support with new icon properties for AbstractButton. Note: The Felgo AppTabButton type can no longer provide an icon property, as this would interfere with the Qt additions. So if you use the AppTabButton component and the icon property, rename it to tabIcon to have the same functionality like before this update. There are also many minor changes, like the possibility to load ETC1 and ETC2 compressed textures, multisampling support for layers and some more properties to tune font handling. QML Support for Enum and InstanceOf Type Checks Qt 5.10 also brings some interesting language improvements for QML. First of all, it is now possible use enum properties in QML. Up until now, this was only possible with C++ types. // MyText.qml import QtQuick 2.0 Text { enum MyEnum { Normal, Heading } property int textType: MyText.MyEnum.Normal font.bold: textType == MyText.MyEnum.Heading font.pixelSize: textType == MyText.MyEnum.Heading ? 24 : 12 } // Main.qml import Felgo 3.0 App { MyText { textType: MyText.MyEnum.Heading text: "I'm a headline." } } Using enums for your types will help to better structure your QML code and make it more readable. Another improvement is the possibility to use instanceOf for checking QML types at runtime. import Felgo 3.0 import QtQuick 2.0 App { // two QML items, used for type checking Item { id: testItem } Rectangle { id: testRect } // function to check wheter an item is a Rectangle function isRectangle(item) { return item instanceof Rectangle } // type check example Component.onCompleted: { console.log("testItem is Rectangle? "+isRectangle(testItem)) console.log("testRect is Rectangle? "+isRectangle(testRect)) } } This comes in handy when you create QML types at runtime. It is also useful to check types when iterating the QML tree or working with child items of a container. Qt 5.10 Tech Preview: Pointer Handlers The next big feature for Qt Quick are the new pointer handlers. Pointer handlers will improve handling complex multi-touch scenarios in the future. Instead of using the Mouse- and TouchArea types, you can now attach handlers for different pointer events. import Felgo 3.0 import QtQuick 2.0 import Qt.labs.handlers 1.0 App { Rectangle { width: dp(100) height: dp(100) // handle touch or mouse-clicks TapHandler { onTapped: console.log("Item clicked") } // make the item rotate- and zoom-able with two-finger gestures PinchHandler { } } } Support for the pointer handler types is still in technology preview. So keep in mind that all types and properties will see bigger changes with the next Qt versions. Qt 3D Improvements and Qt 3D Studio The new Qt version extends the Qt 3D capabilities with many features. For example, support for Shader graphs and the technology preview of a skeletal animation system. There’s also a new Scene2D type, which simplifies embedding Qt 3D content in a Qt Quick Scene. It covers most of the main required features then. The focus for Qt 3D is thus moving to improve performance and memory consumption. Qt 3D Studio is a graphical editor used to create 3D user interfaces. It consists of both a runtime component that is run in your application and a graphical design tool to design and create the UI. While the tool is a standalone application, the runtime can easily be integrated with the rest of Qt. The runtime and the 3D Studio application are both available under commercial and GPL licensing. For more information about Qt 3D Studio, please have a look at the this post. Place Custom Shaped Items in Your Scene Qt Quick also gained a plugin that allows placing custom shaped items into the scene. You no longer require to work with QQuickPaintedItem or the Canvas type to create custom shapes. You can now use the new Shape Quick Item, which is backed by either actual geometry or a vendor-specific GPU accelerated path rendering approach. The main benefits are: - There is no rasterization involved. This allows shapes that span a large area of high-resolution screens and good-looking animations. - The API is fully declarative and all properties can be bound to in QML expressions, or animated using the usual tools of Qt Quick. - There are multiple rendering implementations under the hood. For example, a different renderer for NVIDIA GPUs takes advantages of certain OpenGL extensions. Other optimized renderers can thus also be added in the future. Have a look at the blog post and documentation for more details. Qt 5.10 also adds many other improvements and fixes for e.g. Qt Core, Qt Widgets or Embedded Systems. Also note that with Qt 5.10, the Qt Networking support for OAuth1 & 2 and Qt Speech features matured into a stable version. They are no longer in tech preview now. See the official release blog for more information. A list of all new features in Qt 5.10 can be found here. Qt Creator 4.5 Support The new version includes several UI improvements and better support of platform-specific tools. One of the bigger UI changes is the new File System Navigation Tree. New File System Navigation Tree The File System navigation pane is a lot more useful now. It shows a file system tree, where you can select the root directory from a list of useful folders: - The “computer” root - Your home directory - Your default projects directory - The base directories of all the projects you have open in Qt Creator Improved IDE Support for Android The package manager within Android SDK is no longer available since Android build tools version 25.3.0. There is no UI tool available to manage the installed packages without Android Studio. Because of that, Qt Creator now comes with an own package manager for Android. Note: The command line tool of Android SDK cannot update packages on Windows, and fails with JDK 9. This applies to Qt Creator as well, as it depends on the tool. The package manager UI for Android is thus not usable on Windows until a new Android build tools version fixes these issues. Qt Creator now also provides better information about problems with the installed SDK. For example for missing components or unmet version requirements. More Additions Qt Creator 4.5 also comes with better support of latest platform tools for iOS. It fixes the mechanism for switching between iOS Simulator device types with XCode 9. For Windows, the detection of Visual Studio Build Tools 2017 is also fixed now. For more information about Qt Creator 4.5 see the official release blog. a full list of improvements and fixes to Felgo in this update, please check out the change log! More Posts Like This Release 2.14.2: Live Code Reloading with Native Cross-Platform Plugins Release 2.14.1: Update to Qt 5.9.3 | Use Live Code Reloading on macOS and Linux Release 2.14.0: Live Code Reloading for Desktop, iOS & Android How to Make Cross-Platform Mobile Apps with Qt – Felgo Apps
https://blog.felgo.com/updates/felgo-2-15-0-qt-5-10-qt-creator-4-5-support-firebase-data-structures-and-queries
CC-MAIN-2022-33
refinedweb
2,116
58.18
servlets servlets why we require wrappers in servlets? what are its uses.../response. For e.g. compression, encryption, XSLT etc. Here is an example: servlets ; Please visit the following links: Logging Filter Servlet Example Response Filter Servlet Servlets Programming Servlets Programming Hi this is tanu, This is a code for knowing...; import javax.servlet.http.HttpServletResponse; //In this example we are going... visit the following links: servlets servlets what are advantages of servlets what are advantages of servlets Please visit the following link: Advantages Of Servlets Can you give me some good factory pattern examples? Can you give me some good factory pattern examples? HI, I am..., Here are some links that you will find helpful, they have good factory pattern example and will help you learn it. Factory Pattern Design Pattern - jsp -servlets jsp -servlets i have servlets s1 in this servlets i have created emplooyee object, other servlets is s2, then how can we find employee information in s2 serv
http://roseindia.net/tutorialhelp/comment/82158
CC-MAIN-2014-15
refinedweb
162
64
#include <stdlib.h> char *getpass(const char *prompt); char *getpassphrase(const char *prompt); #include <unistd.h> char *getpass(const char *prompt); The getpass() function opens the process's controlling terminal, writes to that device the null-terminated string prompt, disables echoing, reads a string of characters up to the next newline character or EOF, restores the terminal state and closes the terminal. The getpassphrase() function is identical to getpass(), except that it reads and returns a string of up to 257 characters in length. Upon successful completion, getpass() returns a pointer to a null-terminated string of at most 9 bytes that were read from the terminal device. If an error is encountered, the terminal state is restored and a null pointer is returned. The getpass() and getpassphrase() functions may fail if: The function was interrupted by a signal. The process is a member of a background process attempting to read from its controlling terminal, the process is ignoring or blocking the SIGTTIN signal or the process group is orphaned. OPEN_MAX file descriptors are currently open in the calling process. The maximum allowable number of files is currently open in the system. The process does not have a controlling terminal. The return value points to static data whose content may be overwritten by each call. See attributes(5) for descriptions of the following attributes: attributes(5), standards(5)
https://docs.oracle.com/cd/E36784_01/html/E36874/getpass-3c.html
CC-MAIN-2019-30
refinedweb
228
53.1
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo Hi everyone, Ok, looks like Jai fixed the problem. > Simply i changed the java file name AL to ATest : Now it is working.I > don't know what is the exact problem. Use a revision control system. No, seriously, it's a good idea to do so. It makes it trivial to compare what you have now with what you had then. There are several freely available systems, including CVS: I'm also guessing that you're trying to compile your classes from scratch? Did you remember to call 'javac' as you were working with the original code? It's all too easy, in languages like Java, to forget to keep the classes in sync with the source files. (We don't have this problem in Python/Jython, since each bytecode file is compiled on the fly, but we do need to worry about it in Java.) Do you have a Makefile or Apache-Ant script to automate your class compilation? Anyway, at least it's working now. Glad to hear it! ---------- Forwarded message ---------- Date: Wed, 18 Feb 2004 10:16:37 +0530 From: jai <mjn@...> To: Danny Yoo <dyoo@...> Subject: Hi Hi This is my orginal code // This is java file import java.util.*; public class AL { public ArrayList disp(int i) { ArrayList a=new ArrayList(); a.add(new String("hai")); a.add(new Integer(10)); System.out.print(a); return a; } public static void main(String ar[]) { } }// end of java file Jython file: import AL import unittest from java.util import ArrayList class jython_AL(unittest.TestCase): def setUp(self): self.obj = AL() self.obj1= ArrayList() def testNormal(self): self.obj1=self.obj.disp(1) self.assertEqual(self.obj.disp(1),self.obj1) suite = unittest.TestSuite() suite.addTest(unittest.makeSuite(jython_AL)) unittest.TextTestRunner(verbosity=2).run(suite) Simply i changed the java file name AL to ATest : Now it is working.I don't know what is the exact problem. Thanks for ur mail. Regards jai
http://sourceforge.net/p/jython/mailman/message/11763184/
CC-MAIN-2015-22
refinedweb
347
69.79
A module is a collection of packages that are released, versioned, and distributed together.". A module path is the canonical name for a module, declared with the module directive in the module‘s go.mod file. A module’s path is the prefix for package paths within the module. A module path should describe both what the module does and where to find it. Typically, a module path consists of a repository root path, a subdirectory within the repository (usually empty), and a major version suffix (for major version 2 or higher). golang.org/x/net. See Custom import paths for details on how paths are resolved to repositories. golang.org/x/tools/goplsis in the /goplssubdirectory of the repository golang.org/x/tools. . v1.2.3-precomes before v1.2.3.‘t two or more modules provide the package, an error is reported. If no modules provide the package, the go command will attempt to find a new module (unless the flags -mod=readonly or -mod=vendor are used, in which case, an error is reported).:(in parallel): golang.org/x/net/html golang.org/x/net golang.org/x golang.org, if all requests to failed with 404 or 410: golang.org/x/net/html golang.org/x/net golang.org/x golang.org After a suitable module has been found, the go command will add a new requirement with the new module‘sfiles. Punctuation tokens include (, ), and =>. Keywords distinguish different kinds of directives in a go.mod file. Allowed keywords are module, go, require, replace, and exclude.. Most identifiers and strings in a go.mod file are either module paths or versions. A module path must satisfy the following requirements: /, U+002F). It must not begin or end with a slash. +, -, ., _, and ~). ., U+002E). CON, com1, NuL, and so on). If the module path appears in a require directive and is not replaced, or if the module paths appears on the right side of a replace directive, the go command may need to download modules with that path, and some additional requirements must be satisfied. ., U+002E), and dashes ( -, U+002D); it must contain at least one dot and cannot start with a dash. /vNwhere Nlooks numeric (ASCII digits and dots), Nmust not begin with a leading zero, must not be /v1, and must not contain any dots. gopkg.in/, this requirement is replaced by a requirement that the path follow the gopkg.in service's conventions. verison (as in require, replace, and exclude directives), the final path element must be consistent with the version. See Major version suffixes. go.mod syntax is specified below using Extended Backus-Naur Form (EBNF). See the Notation section in the Go Language Specificiation for details on EBNF syntax. GoMod = { Directive } . Directive = ModuleDirective | GoDirective | RequireDirective | ExcludeDirective | ReplaceDirective . Newlines, identifiers, and strings are denoted with newline, ident, and string, respectively. Module paths and versions are denoted with ModulePath and Version. ModulePath = ident | string . /* see restrictions above */ Version = ident | string . /* see restrictions above */ moduledirective A module directive defines the main module's path. A go.mod file must contain exactly one module directive. ModuleDirective = "module" ( ModulePath | "(" newline ModulePath newline ")" newline . Example: module golang.org/x/net godirective A go directive sets the expected language version for the module. The version must be a valid Go release version: a positive integer followed by a dot and a non-negative integer (for example, 1.9, 1.14). The language version determines which language features are available when compiling packages in the module. Language features present in that version will be available for use. Language features removed in earlier versions, or added in later versions, will not be available. The language version does not affect build tags, which are determined by the Go release being used. The language version is also used to enable features in the go command. For example, automatic vendoring may be enabled with a go version of 1.14 or higher. A go.mod file may contain at most one go directive. Most commands will add a go directive with the current Go version if one is not present. GoDirective = "go" GoVersion newline . GoVersion = string | ident . /* valid release version; see above */ Example: go 1.14 requiredirective ) excludedirective An exclude directive prevents a module version from being loaded by the go command. If an excluded version is referenced by a require directive in a go.mod file, the go command will list available versions for the module (as shown with go list -m -versions) and will load the next higher non-excluded version instead. Both release and pre-release versions are considered for this purpose, but pseudo-versions are not. If there are no higher versions, the go command will report an error. Note that this may change in Go 1.15. exclude directives only apply in the main module's go.mod file and are ignored in other modules. See Minimal version selection for details. ExcludeDirective = "exclude" ( ExcludeSpec | "(" newline { ExcludeSpec } ")" ) . ExcludeSpec = ModulePath Version newline . Example: exclude golang.org/x/net v1.2.3 exclude ( golang.org/x/crypto v1.4.5 golang.org/x/text v1.6.7 ) replacedirective ) The go command automatically updates go.mod when it uses the module graph if some information is missing or go.mod doesn't accurately reflect reality. For example, consider this go.mod file: module example.com/M require ( example.com/A v1 example.com/B v1.0.0 example.com/C v1.0.0 example.com/D v1.2.3 example.com/E dev ) exclude example.com/D v1.2.3 The update and therefore update go.mod, including go build, go get, go install, go list, go test, go mod graph, go mod tidy, and go mod why. The -mod=readonly flag prevents commands from automatically updating go.mod. However, if a command needs to perform an action that would update to go.mod, it will report an error. For example, if go build is asked to build a package not provided by any module in the build list, go build will report an error instead of looking up the module and updating requirements in go.mod.. Module-aware mode is active by default whenever a go.mod file is found in the current directory or in any parent directory. For more fine-grained control, the GO111MODULE environment variable may be set to one of three values: on, off, or auto. GO111MODULE=off, the gocommand ignores go.modfiles and runs in GOPATHmode. GO111MODULE=on, the gocommand runs in module-aware mode, even when no go.modfile is present. Not all commands work without a go.modfile: see Module commands outside a module. GO111MODULE=autoor is unset, the gocommand runs in module-aware mode if a go.modfile is present in the current directory or any parent directory (the default behavior). In module-aware mode, GOPATH no longer defines the meaning of imports during a build, but it still stores downloaded dependencies (in GOPATH/pkg/mod; see Module cache) and installed commands (in GOPATH/bin, unless GOBIN is set). go get go list -m Usage: go list -m [-u] [), module‘s Update field to information about the newer module. The module’s String method indicates an available upgrade by formatting the newer version in brackets after the current version. For example, go list -m -u all might print: example.com/main/module module go mod verify go clean -modcache GOPROXYprotocol is a comma-separated list of URLs or the keywords direct or off (see Environment variables for details). When the go command receives a 404 or 410 response from a proxy, it falls back to later proxies in the list. The go command does not fall back to later proxies in response to other 4xx and 5xx errors. This allows a proxy to act as a gatekeeper, for example, by responding with error 403 (Forbidden) for modules not on an approved list.. When deciding whether to trust the source code for a module version just fetched from a proxy or origin server, the go command first consults the go.sum lines in the go.sum file of the current module. If the go.sum file does not contain an entry for that module version, then it may consult the‘t brakets,‘s. go.mod file: The file that defines a module's path, requirements, and other metadata. Appears in the module's root directory. See the section on go.mod files. import path: A string used to import a package in a Go source file. Synonymous with package path. main module: The module in which the go command is invoked.. version: An identifier for an immutable snapshot of a module, written as the letter v followed by a semantic version. See the section on Versions.
https://go.googlesource.com/website/+/d608874d62b35d3e7c151c42382d3450b40f4ff0/content/static/doc/modules.md
CC-MAIN-2020-50
refinedweb
1,463
59.6
The java.util.jar.JarFile class represents a file in the JAR format. It is a subclass of java.util.zip.ZipFile, and JarFile objects are almost exactly like ZipFile objects. public class JarFile extends ZipFile The JarFile class has five constructors: public JarFile(String filename) throws IOException public JarFile(String filename, boolean verify) throws IOException public JarFile(String filename, boolean verify, int mode) throws IOException public JarFile(File file) throws IOException public JarFile(File file, boolean verify) throws IOException The first argument specifies the file to read, either by name or with a java.io.File object. The optional second argument, verify, is important only for signed JAR files. If verify is true, signatures will be checked against the file's contents; if verify is false, signatures will not be checked against the file's contents. The default is to check signatures. An IOException is thrown if an entry does not match its signature. The optional third argument, mode, should be one of the named constants ZipFile.OPEN_READ or ZipFile.OPEN_DELETE, to indicate whether the file is opened in read-only or read-and-delete mode. JAR files cannot be opened for writing. The JarFile class is so similar in interface and behavior to java.util.zip.ZipFile that I can spare you a lot of details about most of its methods. It declares only the following five methods (though of course you shouldn't forget about the others it inherits from its superclass): public ZipEntry getEntry(String name) public Enumeration entries( ) public InputStream getInputStream(ZipEntry ze) throws IOException public JarEntry getJarEntry(String name) public Manifest getManifest( ) throws IOException getEntry( ), enTRies( ), and getInputStream( ) are used exactly as they are for zip files. getJarEntry( ) is used almost exactly like getEntry( ), except that it's declared to return an instance of JarEntry, a subclass of ZipEntry. Some extra work takes place in these methods to read the manifest file and verify signatures, but unless the signatures don't verify (in which case an IOException is thrown), none of this is relevant to the client programmer. The one really interesting new method in this list is getManifest( ), which returns an instance of the java.util.jar.Manifest class. You can use this to read the entries in the manifest file, as described in the section on the Manifest class later in this chapter.
https://flylib.com/books/en/1.134.1/jarfile.html
CC-MAIN-2018-51
refinedweb
391
52.29
- NAME - SYNOPSIS - DESCRIPTION - EXAMPLE - MOCK OBJECT METHODS - SUB OBJECT METHODS - NOTES - AUTHOR - BUGS - REPOSITORY - BUILD RESULTS - SUPPORT - ACKNOWLEDGEMENTS - LICENSE AND COPYRIGHT NAME Mock::Sub - Mock package, object and standard subroutines, with unit testing in mind. SYNOPSIS # see EXAMPLES for a full use case and caveats use Mock::Sub; # disable warnings about mocking non-existent subs use Mock::Sub no_warnings => 1 # create the parent mock object my $mock = Mock::Sub->new; # mock some subs... my $foo = $mock->mock('Package::foo'); my $bar = $mock->mock('Package::bar'); # wait until a mocked sub is called Package::foo(); # then... $foo->name; # name of sub that's mocked $foo->called; # was the sub called? $foo->called_count; # how many times was it called? $foo->called_with; # array of params sent to sub # have the mocked sub return something when it's called (list or scalar). $foo->return_value(1, 2, {a => 1}); my @return = Package::foo; # have the mocked sub perform an action $foo->side_effect( sub { die "eval catch" if @_; } ); eval { Package::foo(1); }; like ($@, qr/eval catch/, "side_effect worked with params"); # extract the parameters the sub was called with (if return_value or # side_effect is not used, we will return the parameters that were sent into # the mocked sub (list or scalar context) my @args = $foo->called_with; # reset the mock object for re-use within the same scope $foo->reset; # restore original functionality to the sub $foo->unmock; # re-mock a previously unmock()ed sub $foo->remock; # check if a sub is mocked my $state = $foo->mocked_state; # mock out a CORE:: function. Be warned that this *must* be done within # compile stage (BEGIN), and the function can NOT be unmocked prior # to the completion of program execution my ($mock, $caller); BEGIN { $mock = Mock::Sub->new; $caller = $mock->mock('caller'); }; $caller->return_value(55); caller(); # mocked caller() called DESCRIPTION Easy to use and very lightweight module for mocking out sub calls. Very useful for testing areas of your own modules where getting coverage may be difficult due to nothing to test against, and/or to reduce test run time by eliminating the need to call subs that you really don't want or need to test. EXAMPLE Here's a full example to get further coverage where it's difficult if not impossible to test certain areas of your code (eg: you have if/else statements, but they don't do anything but call other subs. You don't want to test the subs that are called, nor do you want to add statements to your code). Note that if the end subroutine you're testing is NOT Object Oriented (and you're importing them into your module that you're testing), you have to mock them as part of your own namespace (ie. instead of Other::first, you'd mock MyModule::first). # module you're testing: package MyPackage; use Other; use Exporter qw(import); @EXPORT_OK = qw(test); my $other = Other->new; sub test { my $arg = shift; if ($arg == 1){ # how do you test this?... there's no return etc. $other->first(); } if ($arg == 2){ $other->second(); } } # your test file use MyPackage qw(test); use Mock::Sub; use Test::More tests => 2; my $mock = Mock::Sub->new; my $first = $mock->mock('Other::first'); my $second = $mock->mock('Other::second'); # coverage for first if() in MyPackage::test test(1); is ($first->called, 1, "1st if() statement covered"); # coverage for second if() test(2); is ($second->called, 1, "2nd if() statement covered"); MOCK OBJECT METHODS new(%opts) Instantiates and returns a new Mock::Sub object, ready to be used to start creating mocked sub objects. Optional options: return_value => $scalar Set this to have all mocked subs created with this mock object return anything you wish (accepts a single scalar only. See return_value()method to return a list and for further information). You can also set it in individual mocks only (see return_value()method). side_effect => $cref Set this in new()to have the side effect passed into all child mocks created with this object. See side_effect()method. mock('sub', %opts) Instantiates and returns a new mock object on each call. 'sub' is the name of the subroutine to mock (requires full package name if the sub isn't in main::). The mocked sub will return the parameters sent into the mocked sub if a return value isn't set, or a side effect doesn't return anything, if available. If in scalar context but a list was sent in, we'll return the first parameter in the list. In list context, we simply receive the parameters as they were sent in. Optional parameters: See new() for a description of the parameters. Both the return_value and side_effect parameters can be set in this method to individualize each mock object, and will override the global configuration if set in new(). There's also return_value() and side_effect() methods if you want to set, change or remove these values after instantiation of a child sub object. mocked_subs Returns a list of all the names of the subs that are currently mocked under the parent mock object. mocked_objects Returns a list of all sub objects underneath the parent mock object, regardless if its sub is currently mocked or not. mocked_state('Sub::Name') Returns 1 if the sub currently under the parent mock object is mocked or not, and 0 if not. Croaks if there hasn't been a child sub object created with this sub name. SUB OBJECT METHODS These methods are for the children mocked sub objects returned from the parent mock object. See "MOCK OBJECT METHODS" for methods related to the parent mock object. unmock Restores the original functionality back to the sub, and runs reset() on the object. remock Re-mocks the sub within the object after calling unmock on it (accepts the side_effect and return_value parameters). called Returns true (1) if the sub being mocked has been called, and false (0) if not. called_count Returns the number of times the mocked sub has been called. called_with Returns an array of the parameters sent to the subroutine. confess()s if we're called before the mocked sub has been called. mocked_state Returns true (1) if the sub the object refers to is currently mocked, and false (0) if not. name Returns the name of the sub being mocked. side_effect($cref) Add (or change/delete) a side effect after instantiation. Send in a code reference containing an action you'd like the mocked sub to perform. The side effect function will receive all parameters sent into the mocked sub. You can use both side_effect() and return_value() params at the same time. side_effect will be run first, and then return_value. Note that if side_effect's last expression evaluates to any value whatsoever (even false), it will return that and return_value will be skipped. To work around this and have the side_effect run but still get the return_value thereafter, write your cref to evaluate undef as the last thing it does: sub { ...; undef; }. return_value Add (or change/delete) the mocked sub's return value after instantiation. Can be a scalar or list. Send in undef to remove previously set values. reset Resets the functional parameters ( return_value, side_effect), along with called() and called_count() back to undef/false. Does not restore the sub back to its original state. NOTES This module has a backwards parent-child relationship. To use, you create a mock object using "MOCK OBJECT METHODS" new and mock methods, thereafter, you use the returned mocked sub object "SUB OBJECT METHODS" to perform the work. The parent mock object retains certain information and statistics of the child mocked objects (and the subs themselves). To mock CORE::GLOBAL functions, you *must* initiate within a BEGIN block (see SYNOPSIS for details). It is important that if you mock a CORE sub, it can't and won't be returned to its original state until after the entire program process tree exists. Period. I didn't make this a Test:: module (although it started that way) because I can see more uses than placing it into that category. AUTHOR Steve Bertrand, <steveb at cpan.org> BUGS Please report any bugs or requests at REPOSITORY BUILD RESULTS CPAN Testers: SUPPORT You can find documentation for this module with the perldoc command. perldoc Mock::Sub ACKNOWLEDGEMENTS Python's MagicMock module. LICENSE AND COPYRIGHT This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License. See for more information.
https://metacpan.org/pod/Mock%3A%3ASub
CC-MAIN-2018-09
refinedweb
1,419
59.53
On Fri, Oct 2, 2009 at 12:38 PM, Francesco Abbate <francesco.bbt@gmail.com> wrote: > The wiki could be a good idea but I don't know how to set it up. Otherwise I > still think that interested people can do a little effort and commit to the > SVN repository a properly formatted .txt files. It is very easy for everyone > to get in and it will assure an higher level of quality for the > documentation. The doc source seems to be easy to edit. It's easy to build from source, but (as I said) you have to use the Lua includes packaged in the contained Lua source distribution, because of the LNUM patch. A few first impressions: 1) It would be cool if lgsl were available as a regular Lua module, but it does require a LNUM-complex patched Lua. Perhaps if it used something like lhf's lcomplex, it could be more generally used? 2) The contents of the math table are all made global (there is in fact no 'math' table anymore). OK, I can see the temptation, but most Lua users are now used to bringing in math functions they need from math. This will also bite you if you decide to override functions like exp() to work on vectors/matrices. (A fast map function would be useful, otherwise) 3) A general remark about namespaces; everything is in the global namespace, except when it isn't (like fft). Again, it is probably good practice to put global things like new() into a table like lgsl 4) Matrices are indexed beginning at zero, which was a little suprise! I know there are fierce wars about this, but having two conventions hanging around is not good. steve d.
http://lua-users.org/lists/lua-l/2009-10/msg00032.html
CC-MAIN-2019-39
refinedweb
292
67.99
OK, now that it’s just us API fanatics, imagine you need to call a stream read in an interface method implementation, but the interface doesn’t declare an I/O exception. For concreteness, suppose we have an interface: interface Foo { public void bar(); } and we want to implement a Foo that calls streaming I/O, as in: public class IOFoo implements Foo { public void bar() { InputStream in = ...; ... int numBytes = in.read(buf); // throws IOException } } A common example is the Java interface Runnable, which has a run() method which does not throw exceptions. Given that this is the basis for threads, the issue comes up all the time. This won’t compile, because the call to the read method throws an I/O exception, which is a checked exception that is not declared on the run method. In fact, it can’t be declared on the run method, because the interface doesn’t declare an I/O exception. What I too often see is this broken workaround: int numBytes = -1; try { numBytes = in.read(); } catch (IOException e) { throw new RuntimeException(e); } What’s wrong here? There’s no way for anyone calling the bar() method to recover the exception and handle it. You could document that it throws a RuntimeException, but all kinds of problems can lead to a runtime exception being thrown, and there’s no way to tell which threw it. (Yes, you could encode it in the message, but that’s an even worse idea, as it’s very hard to maintain and code against.) An alternative approach is to follow the pattern used by Java’s PrintWriter class and buffer the exception then make it available to the caller. Here’s a simplified example that buffers a single exception: public class IOFoo implements Foo { private IOException mException; public void bar() { InputStream in = ...; ... try { in.read(); // throws IOException } catch (IOException e) { mException = e; return; } } public IOException getIOException() { return mException; } } That way, a caller can do this: IOFoo f = new IOFoo(...); f.run(); if (f.getIoException() != null) throw (f.getIOException()); Now the context around the call to IOFoo’s run can catch the IOException. For instance, if this is in a method call, you can just declare the method to throw the checked IOException. Note that we need to declare f as an IOFoo, not just a Foo, because we call its getIOException method. Java’s PrintWriter keeps a list of exceptions that have been raised in calls to its methods. That’s because it keeps going after exceptions, assuming the client would rather they be silently ignored. I don’t think that’s such a great idea, for the obvious reason that there’s no telling what’s going to happen when I/O exceptions are just ignored. It would be straightforward to set a flag that stops any further action once an exception has been raised. The case I was actually using this in was for was writing a static method to serialize a tokenized language model. Visiting the n-grams requires an IntArrayHandler whose handle method is not declared to throw any checked exceptions. I wanted any embedded exception to be buffered and thrown by the top-level static method. August 19, 2011 at 3:12 pm | This is why I hate Interfaces. The only thing they can do is break.
http://lingpipe-blog.com/2011/08/16/buffering-exceptions/
CC-MAIN-2014-42
refinedweb
556
70.13
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. How to hide Edit button for specific user in form issue?It's Urgent..Somebody Help me? I want to hide edit button in form issue for specific user. In this case , I want user only can create issue but can't edit any recorded issue. Only admin and project manager that able to Edit. Note : Sorry for asking same question, because this problem isn't solved yet since 4 days ago.. I've seen this question for a few days I though that you already have a solution. I don't have so much time but this is going to help you. I hope that you know how to code. def write(self, cr, uid, ids, datas = {}, context = {} ): if uid not in self.pool.get('res.users').search(cr, uid, [('login','in',['admin','grover'])]): return super(class_name, self).write(cr, uid, ids, datas, context) else: raise osv.except_osv('Error','You are not allowed to make changes!') Right now admin and grover can't make changes. You can have 1 or more users on that list ['username1','username2','username3'] It's the easiest way. there are other ways to check groups but as I told you I don't have so much time right now. Oh. this you have to do it inside your module, or if it's a native module inside the inherited module. By the way sorry for I asked this question in this 4 days , because it's true..I'm not solve this problem yet.. I am a beginner in openerp..If you don't mind to help me again,. can you tell me how to code in function "if uid .....#check if user belongs to a certain group" above..in this case project user can't edit form issue..thank you for helping me I hope that can help you. Big thanks Mr.Grover..hmm I copy your code and follow your instruction above..but it become can't connect to open erp..it's written "No Handler Found"..sorry for distrub you again..if you don't mind would you help me again?which file that I have to code?I already do in file project_issue.py Thank you very much mr Grover Hai Mr.Grover..sorry for disturbing you again..hmmm if you don't mind to help me again..please response my comment above...because this project deadline in next tuesday so that I really need your help..thank you sir No Handler found happens when you can't connect to the server. Maybe you modified something that you shouldn't modify. The code above only can produce errors but in the server if has something wrong. hmmm okei, but when the code above is removed my server is working again..hahaha..btw , thank you for helping me..:) Can you help me where should i post that code? Is it in project_issue.py ? Yes, inside that file, inside the class that you want to modify You can make your record in two states :-1.)State="OPEN",2.)State="Submitted". Once you submitted the record then the creator of the record cannot edit it , then only Project Manager or the Admin can edit the record. Make a new Group "Can_Edit_Group" in the settings-->Users-->Groups.And add the project manager into it. You need to add state and edit_group field in your _columns in your .py file. _columns={ 'state':fields.selection([('open', 'Open'), ('submitted','Submitted'),],'Status', readonly=True), 'edit_group':fields.many2one('res.groups', string='HR Manager Group'), } def _get_edit_group(self, cr, uid, context=None): all_groups=self.pool.get('res.groups') edit_group = all_groups.browse(cr, uid, all_groups.search(cr,uid,[('name','=','Can_Edit_Group')])[0]).id return edit_group _defaults = { 'edit_group':lambda self, cr, uid, context:self._get_edit_group( cr, uid, context=None) } def change_state(self, cr, uid, ids, context=None): self.write(cr, uid, ids, { 'state' : 'submitted' }) return True In XML add a button to change the record state on click. Put the button inside the header tag. If you are inheriting the View then do it using the Xpath else, put the header tag after you open the form tag. <header> <group attrs="{'invisible':[('state','=','submitted' )]}"> <button name="change_state" string="Submit" type="object" states="open" class="oe_highlight"/> </group> <field name="state" widget="statusbar" statusbar_visible="open,submitted"/> <header> In the record rule you can enter this rule: This will check if the state="submitted" and edit_group i.e the id of the group of whom users can edit the record are "in" the logged in user group.If this condition satisfy then the logged in user can edit the record else not. ['&',('state','=','submitted'),('edit_group','in',[g.id for g in user.groups_id])] Give them all the access i.e check the boolean field read,write,create,delete and add the group into it i.e. add the Can_Edit_Group And don forget to add the your object and give record a name. And check the Active field. I hope this is what you were looking for. any1 can help me? Hey, If you want , you can make your edit button disable for some group of users, without hiding them .Will that will work for you ..? About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/how-to-hide-edit-button-for-specific-user-in-form-issue-it-s-urgent-somebody-help-me-24933
CC-MAIN-2017-22
refinedweb
914
69.38
Recreate cache.dat file to increase space on disk Hi, I use cache 5.0 and access cache through OpenVMS. The disk-space under a particular namespace has grown up in size due to cache.dat. How to recreate cache.dat to bring up more space back on disk? Wow. 5.0 is roughly 12..15 yrs. back. Could be ^GBLOCKCOPY existed then already in NS %SYS. It should be able to cover your needs and create a new smaller CACHE.DAT in a new directory. Hi... I tried the ^GBLOCKCOPY which seems to copy all the Globals from the particular database to other namespace. The cache.dat under the existing namespace still has the same size. Please let me know how to create a smaller cache.dat using the ^GBLOCKCOPY Thanks! definitely a good idea to run routine ^%GSIZE to find the big consumers and packing directory: c:\intersystems\cache\mgr\user\ Page: 1 GLOBAL SIZE 08 Aug 2017 10:28 AM Global Blocks Bytes Used Packing Contig. -------- -------- --------------- ------- ------- CacheStd 1 140 2 % 0 CacheStdS 1 92 1 % 0 CacheStream 109 764.376 86 % 70 ERRORS 1 12 0 % 0 G1 1 72 1 % 0 Doc for 5.0 is available at At the bottom of that page I found this: Using the Caché GBLOCKCOPY Routine — This article describes the basics of running GBLOCKCOPY to copy globals. The link is to a PDF. One of the use cases in the PDF is as follows: Reclaim unused space in a database If a large global is created then killed in a database, there may be a large excess of unused space in the database. This space can be removed by copying all the globals in the database to a new one, and then replacing the old database with the new database. That's what I think you need to do. It sounds like you used GBLOCKCOPY to copy to a namespace in an existing database rather than to a fresh empty database. Of course, you will need enough disk space to have the smaller new database alongside the big database until you have completed the copying. thx. so you confirmed to me that it was't just wishful thinking It's a very long time since I used GBLOCKCOPY on a Cache 5.0 system, but I think you may be able to simplify your steps. 1. Create a new temporary database via Configuration Manager. You probably don't need a new namespace as well. 2. Use GBLOCKCOPY to copy your old database contents into your new database. 3. Dismount your old database and your new one. Or just shut down Cache completely. 4. Rename your old database (the big one), e.g. to CACHE.oldDAT 5. Move the temporary database's CACHE.DAT file to where the old one was. 6. Mount your database (or start Cache). 7. Use Configuration Manager to delete the temporary database you defined in step 1. 8 Check everything is working. 9. Delete the renamed big database file you preserved in step 4. 10. Come back to DC, tell us how it went, and set the checkmark against one of the answers to show you have accepted it. Thanks. Pleas let me know if the following work plan is correct... Also would a direct DELETE cache.dat work? It's not yet time for GBLOCKCOPY. In namespace %SYS you should find a routine ^GCOMPACT (at least according to docs) Based on the results of %GSIZE you now compact those globals with most blocks and lowest packing. This generates free blocks that will be eliminated during GBLOCKCOPY. Routine ^%FREECNT might help you to follow up your efforts Purpose: Displays the total amount of disk space within a volume group and the amount of free space. HTH Hi... Thanks a lot... So I tried under mine and got the following overall packaging So, would a ^GBLOCKCOPY return back space for me? HI Minu GBLOCKCOPY is used to create a smaller database, after some data is removed and leaves sparse space in the database. If your database is full, then copying it will not save space. Before anything else, you should check the size of your globals and remove any data that you do not need. GBLOCKCOPY should then reclaim some space for you Why do you think that it may help you? Do you know how much data in your database? As 5.0 is so old I even not remember how to check the actual size of data in the Database. But I think you can do the integrity check, and in this report, you will have this information in last rows. And then if you will have less than 10-20% of free space, I think it will not be needed for you. And with this report, you can see how big all globals in this database, and if some useless globals may be unexpectedly exists or have unexpected size. If so, you should check your code, why it could happen and remove it, so you can increase free space, which can be used then for grown useful globals. If you will have so much free space, which you want to free, you can use ^GBLOCKCOPY, to copy all or only useful globals to the new database. To mark your question as "answered" on Developer Community, please click the checkmark alongside the answer you (as author of the question) accept. Social networks InterSystems resources To leave a comment or answer to post please log in Please log in To leave a post please log in
https://community.intersystems.com/post/recreate-cachedat-file-increase-space-disk
CC-MAIN-2020-50
refinedweb
930
81.33
I'm having an issue with connecting to my WLAN after a deep sleep. I'm using MicroPython v1.10-8-g8b7039d7d on 2019-01-26; ESP module with ESP8266 Since the credentials are stored in flash, all I have to do to reconnect to the last AP should be But this only works if it is a hard reset or machine.reset(), not if the ESP is waken up from machine.deepsleep() using RTC IRQ or even esp.deepsleep(time). If I do this however, it will connect to my WLAN again, even after a deep sleep. Code: Select all import network wlan = network.WLAN(network.STA_IF) wlan.active(True) I'm I missing something from the documentation regarding deep sleep and WLAN capabilities? Or is the documentation missing something regarding this? I don't want to be using wlan.connect(..., ...) on every reset as (in my understanding) it will break the flash. Code: Select all import network wlan = network.WLAN(network.STA_IF) wlan.active(True) wlan.connect("myssid", "mypassword") Any suggestions? Thanks in advance! Regards, Jonas
https://forum.micropython.org/viewtopic.php?t=6248
CC-MAIN-2019-39
refinedweb
178
60.41
Express killers, part III In this part we will consider how JVM takes care of reference printing, that are pointing to nowhere, it means null. There is short code sample that prints class name of the object received from collection: import java.util.ArrayList; import java.util.List; public class PrintNull { private static PrintNull o; public static void main(String[] args) { List<PrintNull> list = new ArrayList<PrintNull>(); list.add(o); for (PrintNull i : list) { System.out.println(i.toString()); } } public String toString() { return (this == null) ? "<null>" : super.toString(); } } What will be printed? - null - <null> - different value every time - none of above Very short sample and question: what is it doing? Anything at all? Is it result of "refactoring by removing"? synchronized(obj) { } Answers: First example: Null was added to collection, so it will be taken in loop. Try to call method on the null object will result in throwing exception. It will be safer to use System.err.println(i) instead, what will print "null". Overwritting toString() method is useless. Also check if this is equal to null is pointless (when is it true?). Second example: I found that code once in repository. We were discussing together if it is done by purpose or maybe it is side effect of some refactoring and code removal. At first sight this code is useless, because there is nothing inside. However, if you look closer, we might find potential usage. There is sample usage. Check what will be printed with and without this block in printTrue() method. public class Sync implements Runnable { private static final Object obj = new Object(); private final boolean id; public Sync(boolean id) { this.id = id; } public static void main(String[] args) { new Thread(new Sync(false)).start(); } public void run() { System.err.println("start:" + id); if (id) { printTrue(); } else { printFalse(); } System.err.println("end:" + id); } public void printTrue() { // remove that lines and check results synchronized (obj) { } } public synchronized void printFalse() { synchronized (obj) { try { new Thread(new Sync(true)).start(); wait(2000); System.err.println("print"); } catch (InterruptedException ex) { ex.printStackTrace(); } } } } Calling wait(2000) is done only for purpose of this example, and execution may be different (however 2 secs should be enough to see difference). To sum up, code is weird, however removing it may cause with some new bugs to fix. Translation: Grzegorz Duda Nobody has commented it yet.
http://www.javaexpress.pl/article/show/Express_killers_part_III
CC-MAIN-2021-04
refinedweb
392
60.31